I was insanely jealous when Teeth
magazine started because their tech was so cool. I'm a little more
controlled over Deepleap because it's
the same people and I've already been jealous once.
Deepleap is cool, and potentially even cooler. I think the real innovation
is the backend which decides what services are useful for some selected text
or a current page; the services themselves are just a bonus. I'd very much
like to build on that. Hopefully on their not-yet-there developer's page
it'll emerge that the protocols (promised to be open) behind it are simple and only
require the Deepleap client (and expert system); what doesn't bode well
is that the metainfo
is kept locally. Cache the xml metainfo by all means, but determine who's
metainfo'd up by checking on client requests. The database is distributed in
xml.
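The cache-but-verify idea could be sketched roughly like this. Everything below is my own guess at how an open Deepleap-style protocol might look; the class, the XML shape, and the `accepts` element are all invented for illustration, since the real developer's page isn't there yet:

```python
import time
import xml.etree.ElementTree as ET


class ServiceDirectory:
    """Cache XML service descriptors locally, but re-check on client
    requests rather than trusting a static local list. Purely a
    sketch: the descriptor format here is hypothetical."""

    def __init__(self, fetch, ttl=300):
        self.fetch = fetch   # fetch(url) -> XML string (network in real life)
        self.ttl = ttl       # seconds before a cached descriptor goes stale
        self.cache = {}      # url -> (timestamp, parsed descriptor)

    def descriptor(self, url):
        now = time.time()
        entry = self.cache.get(url)
        if entry and now - entry[0] < self.ttl:
            return entry[1]  # fresh enough: serve the cached metainfo
        root = ET.fromstring(self.fetch(url))
        self.cache[url] = (now, root)
        return root

    def services_for(self, urls, context):
        """Ask each service's own descriptor whether it handles this
        kind of context (selected 'text' or a current 'page', say)."""
        matches = []
        for url in urls:
            root = self.descriptor(url)
            accepts = {a.text for a in root.findall("accepts")}
            if context in accepts:
                matches.append(root.findtext("name", default=url))
        return matches
```

The point of the design: the XML metainfo lives with each service and is merely cached by the client, so who's "metainfo'd up" is decided at request time, not baked into a local database.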