I'm finally "getting the hang" of retrieving information from the Wikidot API. However, I'm having an issue in one of my applications. The application needs to retrieve all of the pages in a particular category—over 600 pages with many more coming. It then needs to extract information from those pages, process the information, and return a particular set of pages to the application. I'm already starting to have problems with a page failing to download and that breaking the entire process.
I started to rewrite the application to use the RESTful data cache offered by my service provider. However, I just realized that there isn't much difference between the data cache API and the Wikidot API. The data cache uses a basic key-value paradigm, and Wikidot pages can themselves be thought of as data objects, particularly in data-form categories. Either way, I would still be making HTTP calls to retrieve and store data.
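To illustrate the parallel, storing and retrieving a value in the cache is basically just this (the base URL, paths, and lack of auth are hypothetical, since they depend on the provider's actual cache API):

```python
import json
import urllib.request

# Hypothetical endpoint layout; the real base URL and authentication
# depend on the provider's data-cache API.
CACHE_BASE = "https://cache.example.com/v1/keys"

def cache_put(key, value):
    # Store a JSON-serialised value under a key with an HTTP PUT.
    req = urllib.request.Request(
        f"{CACHE_BASE}/{key}",
        data=json.dumps(value).encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def cache_get(key):
    # Retrieve and decode the value stored under a key with an HTTP GET.
    with urllib.request.urlopen(f"{CACHE_BASE}/{key}") as resp:
        return json.loads(resp.read())
```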
So now I have this idea: instead of using the data cache, use Wikidot itself to "cache" my data. Push my computed results back into Wikidot pages, and revise my application so that it can do so on demand.
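In rough terms the write-back would be something like this (again just a sketch; I'm assuming pages.save_one behaves as documented, and the site and cache category are placeholders):

```python
import json
import xmlrpc.client

client = xmlrpc.client.ServerProxy(
    "https://my-app:API_KEY@www.wikidot.com/xml-rpc-api.php"
)

def push_result(page_name, computed):
    # Save the computed data into a page in a dedicated "cache" category,
    # so later runs can read it back instead of recomputing it.
    client.pages.save_one({
        "site": "my-site",
        "page": f"cache:{page_name}",
        "title": page_name,
        "content": json.dumps(computed),
    })

push_result("latest-report", {"page_count": 600, "status": "ok"})
```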
Which of these approaches should I use, or is there another option I should try?