Put simply, a search engine stores (indexes) web pages in its database, and it displays links to those pages on the search engine results page (SERP) when a related query is searched.
After creating a website, we submit it to search engines such as Google, Yahoo, and MSN. The URLs are stored in a URL server, which sends the URL list to the crawler (Googlebot, also known as a spider). The crawler crawls (downloads) the web pages. The downloaded pages are sent to the repository and then to the indexer. The indexer calculates PageRank and stores the pages ordered by keyword and PageRank. When a user searches for a particular query (keyword), the search engine retrieves the web pages related to that query and displays them on the search engine results page in order of PageRank.
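To make that flow concrete, here is a minimal Python sketch of the submit-crawl-store-index pipeline. It is an illustration under simplifying assumptions: a hard-coded seed list stands in for the URL server, and a naive whitespace tokenizer stands in for a real indexer.

```python
import urllib.request
from collections import defaultdict

# A hard-coded seed list standing in for the "URL server" (assumption for illustration).
url_server = ["https://example.com/"]

repository = {}                    # url -> raw HTML: the "repository"
inverted_index = defaultdict(set)  # keyword -> urls containing it: the indexer's output

def crawl(url):
    """Download one page, as the crawler (Googlebot/spider) would."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

for url in url_server:
    html = crawl(url)
    repository[url] = html             # store the raw page first
    for word in html.lower().split():  # naive tokenizer; real indexers parse the HTML
        inverted_index[word].add(url)

# Answering a query is now an index lookup, not a live crawl of the web.
print(inverted_index.get("example", set()))
```

The key design point is that the expensive work (crawling and indexing) happens ahead of time, so answering a query only requires looking up a precomputed index.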
Search engines follow different algorithms for crawling, indexing, processing, PageRank calculation, and retrieval.
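As a rough sketch of the PageRank step: in the classic formulation, a page p receives PR(p) = (1 - d)/N + d * Σ PR(q)/L(q), summed over the pages q that link to p, where d is the damping factor (commonly 0.85), N is the number of pages, and L(q) is the number of outbound links on q. Below is a minimal Python implementation of that iteration; the toy three-page link graph and the fixed iteration count are assumptions for illustration, not how a production engine is configured.

```python
def pagerank(links, d=0.85, iterations=20):
    """Iteratively compute PageRank for a small link graph.

    links: dict mapping each page to the list of pages it links to.
    d: damping factor from the original PageRank formulation.
    """
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        new_pr = {}
        for p in pages:
            # Sum the rank flowing in from every page q that links to p.
            incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new_pr[p] = (1 - d) / n + d * incoming
        pr = new_pr
    return pr

# Toy three-page web: A and B both point at C, C points back at A.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```

In the toy graph, C collects rank from both A and B, so it ends up with the highest score.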
Search engines can only understand the text written in a web page; they cannot interpret images, Flash, animations, frames, JavaScript content, etc.
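This is why on-page text matters so much for SEO. As a sketch of what "text only" means in practice, the snippet below uses Python's built-in HTML parser to keep visible text while discarding an image tag and a script body; the sample HTML is made up for illustration.

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect visible text, skipping <script> bodies, much as a basic crawler does."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.text.append(data.strip())

parser = TextOnly()
parser.feed('<p>Welcome</p><img src="logo.png"><script>var x = 1;</script>')
print(parser.text)  # ['Welcome'] -- the image and the script contribute nothing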
By default, the search engine revisits a web page roughly every 15 days. If a website is updated frequently, the crawler visits it frequently as well.
For example, search engines crawl news portals every few hours, because those sites are updated every hour or two.
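One simple way to model this revisit behaviour is an adaptive schedule: shorten the interval when a page has changed since the last crawl, lengthen it when it has not. The sketch below is a guess at such a policy, not a documented search-engine rule; the 15-day ceiling, one-hour floor, and halving/doubling factors are all assumptions.

```python
import datetime

def next_crawl(last_crawl, page_changed, interval):
    """Shrink the revisit interval when a page changed, grow it when it did not.

    The 15-day ceiling, 1-hour floor, and halving/doubling rule are
    illustrative assumptions, not a documented search-engine policy.
    """
    if page_changed:
        interval = max(datetime.timedelta(hours=1), interval / 2)
    else:
        interval = min(datetime.timedelta(days=15), interval * 2)
    return last_crawl + interval, interval

interval = datetime.timedelta(days=15)
when, interval = next_crawl(datetime.datetime.now(), page_changed=True, interval=interval)
print(when, interval)  # a site that keeps changing converges toward hourly visits
```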