A front-page article by ArnoldIT.com's Stephen E. Arnold highlights the newest versions of the Thunderstone Search Appliance and the Google Search Appliance in the printed July/August edition of KMWorld, a publication of Information Today, Inc.
The author says Thunderstone's high-performance, flexible appliances give administrators and developers excellent control over the system – with strong document-level security, feature-rich tuning controls, and the ability to schedule, stop, pause or configure database crawls just as they can for file servers, Web servers, intranet servers, etc.
You can read the KMWorld article, entitled "Making room for appliances," online here.
We welcome the following organization to our growing Thunderstone Channel Partner Program:
(For Search Appliances and Webinator)
Tredale
1300 737 078
http://www.tredale.com.au
Development work continues on 2009 Thunderstone Software releases of:
"What we have in this particular case is a Native American user group thesaurus language. It's been developed, and it can be added to. The more that it's used – and you put that feedback loop back into this thesaurus – the smarter it becomes. And it starts to create, with this new millennium, a written mind that parallels the thesaurus user group's community. This is something that TEXIS is equipped to deal with that the other stuff out there is not equipped to deal with. It's part of its strength."Kathy Pincus
Chief Technology Officer
Mnemotrix Systems, Inc.
http://www.NativeAmericanInstitute.org
Robots.txt refers to a way for a website to indicate to crawlers which parts of the site it would like them to stay out of. This is not specific to Thunderstone software; it applies to other web crawlers too. There are two ways of accomplishing this: a robots.txt file placed at the root of the website, and a robots meta tag placed in the head of individual HTML pages (examples of both appear below).
Note that these are guidelines – they do not create technical restrictions that prevent a crawler from descending into a directory or following links, and they should not be used for security purposes.
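For illustration, here is what each convention looks like. The paths and values shown are hypothetical examples, not settings from any particular site. A robots.txt file at the root of a website might contain:

    User-agent: *
    Disallow: /private/
    Disallow: /tmp/

Each Disallow line names a URL path prefix that cooperating crawlers should avoid. Alternatively, an individual HTML page can ask crawlers not to index it or follow its links with a robots meta tag in its head:

    <meta name="robots" content="noindex, nofollow">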
All Thunderstone products obey robots.txt and meta robots by default. Sometimes you need to index content you don't control that has a robots exclusion on it. If you'd like to ignore the robots recommendations and index that content, there are Walk Settings that allow this:
Feedback, suggestions and questions are welcome. Send your email to