
While I'm comfortable with XML and reading/writing streams, I'm working on a test model that will aggregate some of the useful RSS feeds I'm subscribed to. What I'm finding hard to understand is how exactly the syndication process works, or rather how to picture the process conceptually. I'm looking for feedback/advice from someone who has built their own aggregator, and I'm avoiding a live test for now because too much polling would get me onto the servers' restricted lists. These are my questions, in no particular order:

- Does a website publish an XML file as the syndication format, and does that file sit in a local directory on the server awaiting requests?
- When the file is fetched with an HTTP GET, how does the syndication format know about the last update and therefore release it as a stream?
- Once the XML file has been downloaded to the client machine, what distinguishes the latest copy from the older ones, and how does the client's XML stream reader know about the update and which file to parse?

Any help will be appreciated.
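
To make the question concrete, here is a rough sketch (in Python, with a placeholder feed URL) of what I imagine a polite aggregator doing: a conditional GET that echoes back the Last-Modified/ETag values from the previous poll, and a guid comparison to pick out items that haven't been seen yet. I don't know whether this matches how real aggregators actually work, which is essentially what I'm asking.

    import urllib.request
    import urllib.error
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/feed.xml"  # placeholder, not a real feed

    def poll_feed(last_modified=None, etag=None):
        """Fetch the feed only if it has changed since the previous poll."""
        request = urllib.request.Request(FEED_URL)
        # Send back the validators from the last response so the server can
        # answer 304 Not Modified instead of resending the whole document.
        if last_modified:
            request.add_header("If-Modified-Since", last_modified)
        if etag:
            request.add_header("If-None-Match", etag)
        try:
            with urllib.request.urlopen(request) as response:
                return (response.read(),
                        response.headers.get("Last-Modified"),
                        response.headers.get("ETag"))
        except urllib.error.HTTPError as err:
            if err.code == 304:
                # Nothing new on the server; keep using the cached copy.
                return None, last_modified, etag
            raise

    def new_items(feed_xml, seen_guids):
        """Return the <item> elements whose <guid> has not been seen before."""
        root = ET.fromstring(feed_xml)
        fresh = []
        for item in root.iter("item"):
            guid = item.findtext("guid") or item.findtext("link")
            if guid and guid not in seen_guids:
                seen_guids.add(guid)
                fresh.append(item)
        return fresh

Is this (store the validators between polls, re-download only on change, and dedupe items by guid) roughly the right mental model, or does the server/feed play a more active role than that?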