Version 5.5.2

30 Khordad 1397

New version of Khazeshgar, released alongside the launch of the Iran Web Directory service.

Version 5.0.1

13 Khordad 1397

Version 5.0.1 released, with various changes to the interface and the addition of new services.

Version 4.0.0

12 Khordad 1395

Version 4.0.0 of Khazeshgar, offering an SEO solution and online SEO services for the first time in Iran.

Version 3.0.0

14 Bahman 1394

Version 3 of Khazeshgar released as an automated service, built on the new crawler bots.

Second version of Khazeshgar, based on new services

12 Aban 1394

Launch of the second version of Khazeshgar, with a crawler-bot registration service and the Iran Web Directory.

Launch of the Khazeshgar startup

14 Farvardin 1389

Launch of the Khazeshgar startup.

Open-source products

  • Frontera is a web crawling framework that implements the crawl frontier component and provides scalability primitives for web crawler applications.
  • GNU Wget is a command-line-operated crawler written in C and released under the GPL. It is typically used to mirror Web and FTP sites.
  • GRUB is an open source distributed search crawler that Wikia Search used to crawl the web.
  • Heritrix is the Internet Archive's archival-quality crawler, designed for archiving periodic snapshots of a large portion of the Web. It was written in Java.
  • ht://Dig includes a Web crawler in its indexing engine.
  • HTTrack uses a Web crawler to create a mirror of a web site for off-line viewing. It is written in C and released under the GPL.
  • mnoGoSearch is a crawler, indexer, and search engine written in C and licensed under the GPL (*NIX machines only).
  • news-please is an integrated crawler and information extractor specifically written for news articles, released under the Apache License. It supports crawling and extracting full websites (by recursively traversing all links or the sitemap) as well as single articles.[61]
  • Norconex HTTP Collector is a web spider, or crawler, written in Java, that aims to make Enterprise Search integrators' and developers' lives easier (licensed under the Apache License).
  • Apache Nutch is a highly extensible and scalable web crawler written in Java and released under an Apache License. It is based on Apache Hadoop and can be used with Apache Solr or Elasticsearch.
  • Open Search Server is a search engine and web crawler software released under the GPL.
  • PHP-Crawler is a simple PHP and MySQL based crawler released under the BSD License.
  • Scrapy, an open-source web crawler framework written in Python (licensed under the BSD license).
  • Seeks, a free distributed search engine (licensed under AGPL).
  • Sphinx (search engine), a free search crawler, written in C++.
  • StormCrawler, a collection of resources for building low-latency, scalable web crawlers on Apache Storm (Apache License).
  • tkWWW Robot, a crawler based on the tkWWW web browser (licensed under GPL).
  • Xapian, a search crawler engine, written in C++.
  • YaCy, a free distributed search engine, built on principles of peer-to-peer networks (licensed under GPL).
  • Octoparse, a free client-side Windows web crawler written in .NET.
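Several of the tools above (Frontera, StormCrawler, Nutch) are organized around a "crawl frontier": the queue of URLs that have been discovered but not yet fetched. A minimal sketch of that idea, using a hypothetical in-memory link graph in place of real HTTP fetches:

```python
from collections import deque

# Hypothetical in-memory "web": page -> outgoing links (stands in for HTTP fetching).
PAGES = {
    "a": ["b", "c"],
    "b": ["c", "d"],
    "c": ["a"],
    "d": [],
}

def crawl(seed):
    """Breadth-first crawl: the frontier holds discovered-but-unvisited URLs."""
    frontier = deque([seed])  # the crawl frontier
    visited = set()
    order = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        # A real crawler would fetch the page and extract links here.
        for link in PAGES.get(url, []):
            if link not in visited:
                frontier.append(link)
    return order

print(crawl("a"))  # → ['a', 'b', 'c', 'd']
```

Production frameworks add politeness delays, URL normalization, deduplication, and prioritization on top of this queue, and distribute the frontier across machines for scale; the loop above only illustrates the core idea.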

Last edited on 14 Khordad 1397 at 12:03 by the system administrator