The construction of a sitemap
...... or any search engine robot
Google will read sitemaps in several different formats. As mine is just a small website I am using a simple text sitemap, which I submit using Google Search Console. The sitemap is uploaded to the root of the website and its location is specified in GSC. Google will look for a sitemap even if you don't explicitly specify one.
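For reference, a text sitemap is just a UTF-8 file listing one full URL per line and nothing else. A minimal sketch (these URLs are placeholders, not my actual pages):

    https://www.example.com/
    https://www.example.com/articles/bubbling-and-capturing.html
    https://www.example.com/articles/asynchronous-requests.html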
You can specify more than one sitemap and the Google help will tell you how; one common way is a sitemap index file, sketched in the sitemaps.org section below.
I may construct an XML sitemap if the text format does not give me the results that I expect.
According to the Google Search Console help, an XML-structured sitemap can suggest to Googlebot how often to spider (crawl) each page. It also has a provision (a last-modified date) to indicate when a page has changed.
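A minimal sketch of such a sitemap, following the sitemaps.org protocol (the URL and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2024-01-15</lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>

The <changefreq> values the protocol allows are always, hourly, daily, weekly, monthly, yearly and never.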
sitemaps.org
The sitemaps.org website has examples of XML sitemaps. It also describes the protocol, which lists all the options.
It is interesting to note that there is a comment saying that items such as the change frequency are hints, not commands. So if you say that your page changes hourly it won't necessarily be crawled hourly - that will depend on how "important" the search engine robot thinks your website is.
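The protocol also defines a sitemap index file, which is the mechanism for tying more than one sitemap together, as mentioned above. A minimal sketch (the file names are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>https://www.example.com/sitemap-pages.xml</loc>
        <lastmod>2024-01-15</lastmod>
      </sitemap>
      <sitemap>
        <loc>https://www.example.com/sitemap-articles.xml</loc>
      </sitemap>
    </sitemapindex>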
sitemap.txt and robots.txt
Looking at my server logs I see that there are a lot of 404 errors from robots looking for sitemap.txt and robots.txt. I have created them and I will add to them as I see appropriate.
The format of the sitemap is very simple and follows the format of AdvancedHTML.co.uk.
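The robots.txt file is similarly simple. A minimal sketch, assuming the sitemap sits at the site root (the domain is a placeholder); an empty Disallow line means robots may crawl everything, and a Sitemap line (you can have one per sitemap) tells them where to look:

    User-agent: *
    Disallow:
    Sitemap: https://www.example.com/sitemap.txt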
Another observation from the server logs is that many visits are made by spiders but few visits are made by those making searches - or at least there appear to be few. I realise that not using Google Analytics deprives me of a way of determining whether a visitor found my pages by making a search, and even then I would not know what the visitor searched for. However, it would be good to give the spiders some pages that possible visitors may be searching for. I am not really interested in finding fame, but I think that there are those who may find something of interest. The problem is to structure the sitemap so that the "interesting pages" can be found.
Some of the pages that I think may be of interest are those that I have created after researching more "advanced" techniques of web design and administration - pages such as those on Bubbling and Capturing and the use of asynchronous HTTPRequests.
There are very few pages on this website that are likely to be of interest to the majority of Internet users. In fact, I need to have pretty unique content for someone to find it by making a search. Such content would include information on something local that I have made a page about, or a technical issue that I think I have a unique solution for. These are few.
Checking the sitemap.txt file
The spelling and structure of the URLs in the sitemap can be checked from Notepad++.
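Beyond eyeballing the file in Notepad++, a short script can confirm that every URL in the sitemap actually resolves. A minimal sketch in Python, assuming the sitemap is a local file with one URL per line (the file name is a placeholder):

    import urllib.request
    import urllib.error

    SITEMAP = "sitemap.txt"  # placeholder path to the local sitemap file

    with open(SITEMAP, encoding="utf-8") as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        # Issue a HEAD request so only the headers are fetched, not the page.
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(f"{resp.status}  {url}")
        except urllib.error.HTTPError as e:
            # The server answered, but with an error code such as 404.
            print(f"{e.code}  {url}")
        except urllib.error.URLError as e:
            # DNS failure, refused connection, malformed URL, etc.
            print(f"ERR  {url}  ({e.reason})")

Run it from the directory containing the sitemap; any line that does not report 200 needs attention.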