Content team

The Content team gathers people in charge of providing books in the ZIM format ("books" being understood here as web content stored as single web archives).

Purpose

Provide web-based educational content to people without internet access, and make the experience as seamless as possible. Access and discovery must be user-friendly and market-ready, the content up-to-date and as portable as technically possible.

Goals

  • Book curation must remain focused on educational material, broadly construed;
  • Books should have proper visual formatting;
  • Books should be up-to-date;
  • The Kiwix Library should allow easy and friendly discovery of content.

Responsibilities

  • Content Requests
    • Collaborate with requesters to qualify requests properly. Keep them informed.
    • Ensure we are allowed and able to fulfill requests
    • Initiate new recipes and manage the first publishing if it is a new book
    • Collaborate with the scraper dev. team if necessary
    • Keep the tickets up to date
  • Scraping
    • Ensure the Zimfarm works fine and contribute to its improvements with the dev. team
    • Analyze failures or unexpected behaviors
    • Ensure recipes run properly, fix configuration when necessary and contribute to scraper improvements with the dev. team
    • Ensure workers are online and properly configured
    • Ensure the scrape lifecycle is correct (reasonable pipeline size, running scrapes progressing appropriately, not too many failures)
  • Library management
    • Ensure ZIM filenames and locations (paths) are correct (see the filename sketch after this list)
    • Ensure ZIM metadata are correct
    • Ensure ZIMs are recent and kept up to date (AFAP)
    • Ensure the library is coherent and user-friendly
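
To illustrate the filename check above, here is a minimal sketch in Python. The naming pattern project_language_selection[_flavour]_YYYY-MM.zim (e.g. wikipedia_en_all_maxi_2024-01.zim) is an assumption based on common openZIM practice; the authoritative rules are the openZIM filename conventions, so the regex below is only an approximation.

  import re

  # Approximate ZIM filename pattern: project_lang_selection[_flavour]_YYYY-MM.zim
  # (illustrative only; see the openZIM conventions for the authoritative rules)
  ZIM_NAME_RE = re.compile(
      r"^(?P<project>[a-z0-9.\-]+)"
      r"_(?P<lang>[a-z]{2,3}(-[a-z0-9]+)*)"
      r"_(?P<selection>[a-z0-9\-]+)"
      r"(_(?P<flavour>[a-z0-9\-]+))?"
      r"_(?P<period>\d{4}-\d{2})\.zim$"
  )

  def check_zim_filename(filename: str) -> bool:
      """Return True if the filename looks like a conventional ZIM name."""
      return ZIM_NAME_RE.match(filename) is not None

  # Quick check against one conforming and one non-conforming name
  for name in ("wikipedia_en_all_maxi_2024-01.zim", "my archive.zim"):
      print(name, "->", "OK" if check_zim_filename(name) else "KO")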

Policies

Publishing

  • Content has to be legal in Switzerland
  • Content should not advertise fringe theories
  • Content should preferably be free content
  • If not free, content should:
    • be open content, OR
    • be educational content, OR
    • have an authorization of reproduction
  • Any content we publish should:
    • have (almost) no user-visible errors
    • have proper/correct metadata
    • be easily discoverable in the public library

Content Requests

  • Allow everybody to request new content, changes to existing content, or deletion of content
  • Track the lifecycle of our content portfolio in full transparency
  • New content should be assessed and vetted against the publishing policy (see above)
  • Content requests should be closed:
    • when fully implemented (user-visible)
    • if implementation is refused or impossible
  • ZIM metadata should be provided for new content (see the sketch after this list)
  • Scraping should only start once all prerequisites are satisfied
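
As a companion to the metadata requirement above, here is a minimal sketch in Python of a completeness check. The list of required entries (Name, Title, Creator, Publisher, Date, Description, Language and an illustration) is assumed from common openZIM practice; the openZIM metadata specification is authoritative, and the sample request below is purely hypothetical.

  # Entries commonly required by the openZIM metadata specification
  # (assumption: check the spec for the authoritative, up-to-date list)
  REQUIRED_METADATA = (
      "Name", "Title", "Creator", "Publisher",
      "Date", "Description", "Language", "Illustration_48x48@1",
  )

  def missing_metadata(metadata: dict) -> list:
      """Return the required entries that are absent or empty."""
      return [key for key in REQUIRED_METADATA if not metadata.get(key)]

  # Hypothetical metadata supplied with a new content request
  request = {
      "Name": "example_en_all",
      "Title": "Example Library",
      "Language": "eng",
      "Date": "2024-01-18",
  }
  print("Missing entries:", missing_metadata(request))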

Scraping

  • Scraping leadership means that the initiative should come from the Content team
  • First analysis of errors should be done by the Content team
  • If an error in the scraper is suspected:
    • the issue should be reported to the corresponding scraper code repository
    • scraper problem analysis does not supersede the content request in any manner
  • ZIM quality should be vetted against publishing policy
  • Any recipe should first run successfully in dev before being put in production
  • Hardware resources should be used sparingly

Library Management

Processes

Content Requests

Scraping

Library Management

Workflows

Members

See also