Zimit

Zimit is a tool for creating a ZIM file of "any" website.

Context

openZIM provides many scraper software solutions for dedicated sources of content, like TED, Wikipedia (MediaWiki), Project Gutenberg, ... This is a great way to provide quality ZIM files, but developing and maintaining each of them is costly.

Zimit is our approach to scraping a "random" website and getting an acceptable snapshot that can be used offline.

One important point is that specific embedded pieces of JavaScript code, in particular those used to play videos, continue to work.

Principle

The principles of Zimit are:

  • Crawl the remote website to retrieve all the necessary content
  • Save all the retrieved content in WARC file(s)
  • Convert the WARC file(s) to one ZIM file (this implies embedding a reader in the ZIM file, so the result is a kind of offline Web App; see the sketch after this list)
  • Read the ZIM file in any Kiwix reader
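
To make the conversion step more concrete, here is a minimal TypeScript sketch of what the WARC-to-ZIM conversion conceptually does. The WarcRecord, WarcReader and ZimWriter interfaces are hypothetical stand-ins, not the real warc2zim API (the actual tool is written in Python); the point is only the record-by-record mapping and the embedding of the reader.

```typescript
// Hypothetical interfaces standing in for real WARC and ZIM libraries; the
// actual warc2zim tool is written in Python and its API differs.
interface WarcRecord {
  targetUri: string;   // the URL that was crawled
  contentType: string; // e.g. "text/html"
  body: Uint8Array;    // the captured payload
}

interface WarcReader {
  records(): AsyncIterable<WarcRecord>;
}

interface ZimWriter {
  addEntry(path: string, mimeType: string, content: Uint8Array): Promise<void>;
  finish(): Promise<void>;
}

// Conceptual conversion: every captured URL becomes a ZIM entry, and the
// replay machinery is embedded alongside the content, so the resulting ZIM
// file behaves like an offline Web App.
async function warcToZim(reader: WarcReader, writer: ZimWriter): Promise<void> {
  for await (const record of reader.records()) {
    const url = new URL(record.targetUri);
    // Store the payload under a path derived from its original URL.
    await writer.addEntry(url.host + url.pathname, record.contentType, record.body);
  }
  // Embed the reader itself (stubbed here) into the ZIM file.
  await writer.addEntry("sw.js", "application/javascript", await loadBundledReader());
  await writer.finish();
}

// Stub standing in for the bundled replay code (service worker, Wombat, ...).
async function loadBundledReader(): Promise<Uint8Array> {
  return new TextEncoder().encode("/* bundled wabac.js-based reader */");
}
```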

Player

  • The service worker (SW) is installed on the welcome page. If any page is loaded while the SW is not yet active, a redirection to the homepage happens to load the SW, and the visitor is then automatically sent back to the original page. To achieve that, each page's HEAD node is modified at warc2zim conversion time to insert the appropriate piece of JavaScript (a sketch of such a snippet follows this list).
  • In the reader wabac.js, there is only one part specific to the ZIM content structure: "RemoteWARCProxy". This part knows how to retrieve content from the specific ZIM storage backend (pictured in a sketch below); the rest of the code is unchanged.
  • Regarding URL rewriting itself, there are two kinds, both data-driven:
    • Static URL rewriting, which is done with Wombat
    • Fuzzy matching, which is done within the ServiceWorker
  • The URL rewriting is done at two levels:
    • When JavaScript code calls specific browser APIs, these calls are superseded and ultimately go through Wombat
    • When a URL is requested, the request goes through the service worker, which does the fuzzy matching and the URL rewriting (see the fuzzy-matching sketch below)
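
As a sketch of the snippet mentioned in the first bullet above, the JavaScript inserted into each page's HEAD could look roughly like this. The actual code injected by warc2zim differs; the "redirectedFrom" parameter name and the welcome page path are illustrative assumptions.

```typescript
// Illustrative sketch of the HEAD-injected bootstrap, not warc2zim's actual
// code. If the service worker (SW) does not yet control this page, bounce to
// the welcome page, passing the current URL so the visitor can be sent back
// once the SW is active. "redirectedFrom" is a hypothetical parameter name.
(function bootstrapServiceWorker(): void {
  if (!("serviceWorker" in navigator)) {
    return; // no SW support: degraded, plain browsing
  }
  if (navigator.serviceWorker.controller) {
    return; // SW already active: nothing to do
  }
  const returnTo = encodeURIComponent(window.location.href);
  window.location.href = `/index.html?redirectedFrom=${returnTo}`;
})();

// On the welcome page (same sketch): register the SW, then come back.
async function registerAndReturn(): Promise<void> {
  await navigator.serviceWorker.register("sw.js");
  await navigator.serviceWorker.ready;
  const back = new URLSearchParams(window.location.search).get("redirectedFrom");
  if (back) {
    window.location.href = back; // URLSearchParams already decodes the value
  }
}
```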
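
The role of "RemoteWARCProxy" can be pictured as follows. This is a hypothetical reading of its responsibility, not the actual wabac.js class, and the path scheme is an assumption.

```typescript
// Hypothetical sketch of the role of "RemoteWARCProxy": the one part of the
// embedded wabac.js reader that knows about the ZIM storage backend. The real
// wabac.js interface differs; the path scheme below is an assumption.
class RemoteWarcProxySketch {
  constructor(private zimContentRoot: string) {}

  // Resolve a captured URL to its entry inside the ZIM file and fetch it
  // through whatever HTTP backend the Kiwix reader exposes.
  async getResource(capturedUrl: string): Promise<Response> {
    const u = new URL(capturedUrl);
    const zimPath = `${this.zimContentRoot}/${u.host}${u.pathname}${u.search}`;
    return fetch(zimPath);
  }
}
```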
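
Finally, the data-driven fuzzy matching and the service worker interception can be sketched like this. The rules shown are illustrative assumptions in the spirit of wabac.js, not its real rule set, and lookupInArchive() is a hypothetical helper.

```typescript
// Data-driven fuzzy-matching rules (illustrative assumptions): each rule
// canonicalises a family of URLs so that a request with, say, a varying
// cache-busting parameter still matches the single copy stored in the ZIM file.
interface FuzzyRule {
  match: RegExp;                      // which requests the rule applies to
  canonicalise: (url: URL) => string; // the key to look up in the archive
}

const fuzzyRules: FuzzyRule[] = [
  {
    // Example rule: drop common cache-busting query parameters.
    match: /[?&](_|ts|cb)=/,
    canonicalise: (url) => {
      for (const p of ["_", "ts", "cb"]) url.searchParams.delete(p);
      return url.toString();
    },
  },
];

function fuzzyMatch(requestUrl: string): string {
  for (const rule of fuzzyRules) {
    if (rule.match.test(requestUrl)) {
      return rule.canonicalise(new URL(requestUrl));
    }
  }
  return requestUrl; // no rule applies: exact lookup
}

// Sketch of the service worker's fetch handler: every request is fuzzy-matched
// and answered from the archive. lookupInArchive() is a hypothetical helper
// standing in for the real ZIM storage lookup.
self.addEventListener("fetch", (event: any) => {
  event.respondWith(lookupInArchive(fuzzyMatch(event.request.url)));
});

async function lookupInArchive(canonicalUrl: string): Promise<Response> {
  return fetch(canonicalUrl); // placeholder for the real ZIM lookup
}
```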

Source code