Technically, search engines use a multi-threaded downloader to fetch and analyze your webpages. That downloader is commonly called a search spider or search bot.
It fetches webpages along with all their associated resources, stores them in a database, and the search engine then analyzes them against many factors and ranks them. This process is known as indexing. The Google robot comes back to re-index pages at varying intervals.
If you are not doing this part right, it's needless to say that you are gaining no rankings. This part is a little technical, but if you are a beginner you can start with Google Webmaster Tools.
You may know that dynamic pages can't be crawled by all search engine robots. Although Google handles them, it's still better not to use webpages with parameters in their URLs. If you don't know what that means, think of URLs with a "?" in them. Some content management systems (not WordPress) use this method. Make sure you are not using such URLs; use clean and tidy URLs instead.
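If your site runs on Apache, one common way to get clean URLs is a rewrite rule in .htaccess. This is only a sketch: it assumes a hypothetical CMS that serves posts as index.php?p=123, so adjust the pattern to your own URL scheme.

```
# Hedged example for Apache's mod_rewrite (assumes a hypothetical
# CMS where index.php?p=123 serves post number 123).
RewriteEngine On
# Map the clean URL /blog/123/ to the internal parameterized URL,
# without exposing the "?" to visitors or crawlers.
RewriteRule ^blog/([0-9]+)/?$ index.php?p=$1 [L,QSA]
```

With a rule like this, both visitors and crawlers only ever see /blog/123/, while the server quietly serves the dynamic page behind it.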
You probably already know robots.txt: it's a simple text file placed on your website's server. Two kinds of rules can be systematically written there: the pages or files a crawler may read, and the pages or files it may not.
Any legitimate web crawler always checks this file before crawling your webpages. From all the guidelines, the main question that emerges is: what should you block, and what shouldn't you?
First, decide wisely what is valuable to your SEO and what isn't, then write your robots.txt accordingly.
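As a minimal sketch, a robots.txt might look like the following. The blocked paths here (/wp-admin/, /search/) are just examples of pages with no SEO value; substitute whatever applies to your own site.

```
# Rules apply to every crawler
User-agent: *
# Block admin and internal search-result pages (example paths)
Disallow: /wp-admin/
Disallow: /search/
# Point crawlers at your sitemap (hypothetical URL)
Sitemap: https://www.example.com/sitemap.xml
```

The file must sit at the root of your domain (example.com/robots.txt) or crawlers will never find it.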
When a crawler revisits a webpage, it downloads it again. But before that, it asks the server when the page was last modified, and the server responds with the date and time of the last modification. The crawler checks this information and decides whether it needs to download the page again. This information is carried in the HTTP headers of each webpage (the Last-Modified header). If you enable these headers, by asking your web host if necessary, you will do search engines a favor and save their time.
In return, they will favor you too. That's not a guess; it's in Google's guidelines.
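The decision a crawler makes from those headers can be sketched in a few lines of Python. This is an illustrative model, not any search engine's actual code; the function name and the sample dates are made up.

```python
from email.utils import parsedate_to_datetime

def needs_refetch(cached_last_modified: str, server_last_modified: str) -> bool:
    """Decide whether a crawler should re-download a page.

    Both arguments are HTTP-date strings as sent in the Last-Modified
    response header, e.g. "Wed, 21 Oct 2015 07:28:00 GMT".
    """
    cached = parsedate_to_datetime(cached_last_modified)
    current = parsedate_to_datetime(server_last_modified)
    # Only re-download when the server's copy is newer than ours.
    return current > cached

# A real crawler sends its cached date back in an If-Modified-Since
# request header; a well-configured server replies "304 Not Modified"
# when nothing changed, so the page body is never transferred at all.
```

That 304 short-circuit is exactly the time-saving the paragraph above describes: the crawler spends one cheap header exchange instead of a full download.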