[Image: settings gear inside a search-engine lens, a symbol of web crawling.]

Crawl Settings

How to configure your site for the crawler robots used by search engines.


Basics of web crawling

Technically, search engines use a multi-threaded downloader, commonly called a search spider or search bot, to download and analyze your webpages.
It fetches each webpage along with all the resources it needs, stores them in a database, and the search engine then analyzes them against many factors and ranks them accordingly. This process is known as indexing. Google's robot returns to re-index pages at varying intervals.
If you don't get this part right, it's needless to say that you will gain no rankings. This part is a little technical, but if you are a beginner you can start with Google Webmaster Tools.
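As an illustrative sketch (not a search engine's actual code), the analysis step can be pictured as parsing a downloaded page for signals such as the title and outgoing links. The HTML snippet below is hypothetical:

```python
# Minimal sketch of what a search bot extracts from a fetched page
# before indexing: the title text and the outgoing links.
from html.parser import HTMLParser

class PageAnalyzer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""      # collected <title> text
        self.links = []      # collected href targets
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Hypothetical page content a crawler might have downloaded.
html = ('<html><head><title>Crawl Settings</title></head>'
        '<body><a href="/robots.txt">robots</a></body></html>')
analyzer = PageAnalyzer()
analyzer.feed(html)
print(analyzer.title)  # Crawl Settings
print(analyzer.links)  # ['/robots.txt']
```

A real indexer weighs many more signals, but the fetch-parse-store loop above is the core of what "crawling" means.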

URLs and Their Parameters

You may know that not all search-engine robots can crawl dynamic pages. Although Google handles them, it is still better to avoid webpages with parameters in their URLs. If you don't know what that means, think of URLs with a "?" inside them. Some content management systems (not WordPress) generate URLs this way. Make sure you are not doing the same; use clean, tidy URLs instead.

robots.txt File and Its Proper Usage

You probably know that robots.txt is a simple text file placed on your website's server. Two kinds of rules can be systematically written there: the pages or files a crawler may read, and the pages or files it may not.
Any legitimate web crawler checks this file before crawling your webpages. From all the guidelines, the main question that emerges is: what should you block, and what shouldn't you?

[Image: a gift box, representing a robots.txt block.]
Don't Block the Important Files

First, wisely decide what is valuable to your SEO and what isn't, then write your robots.txt rules accordingly.
Here is a fine resource for getting into the technical details. Moreover, never block any associated stylesheet or JavaScript files unless it's absolutely necessary. Googlebot needs to see the rendered webpage, not only its tags, text, images, and videos.
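As a sketch of this advice, a robots.txt might block a private folder while explicitly allowing stylesheets and scripts. The paths below are hypothetical, and the wildcard patterns (`*`, `$`) are extensions supported by major crawlers such as Googlebot:

```text
User-agent: *
Disallow: /wp-admin/
Disallow: /tmp/
Allow: /*.css$
Allow: /*.js$

Sitemap: https://example.com/sitemap.xml
```

The `Allow` lines make sure rendering resources stay crawlable even if a broader `Disallow` would otherwise catch them.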

The If-Modified-Since Header and Its Importance

When a crawler revisits a webpage, it would normally download it again. But before doing so, it can ask the server when the webpage was last modified, using the If-Modified-Since request header. The server compares that date with the page's last-modification time and either sends the full page again or tells the crawler nothing has changed, so the download can be skipped. This information lives in the header settings of each webpage. If you enable these headers (ask your web host if you're unsure how), you do search engines a favor and save their time.
In return they will favor you too. That's not a guess; it's in Google's guidelines.
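The exchange can be sketched end to end with Python's standard library. Below, a minimal local server honors If-Modified-Since (comparing the date string verbatim, a simplification of real date comparison), and a client plays the crawler's role; the URL and modification date are made up for the demo:

```python
# Sketch of the If-Modified-Since conversation between a crawler and a server.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen
from urllib.error import HTTPError

LAST_MODIFIED = "Wed, 01 Jan 2020 00:00:00 GMT"  # hypothetical modification time

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # If the crawler's copy is current, answer 304 Not Modified (no body).
        if self.headers.get("If-Modified-Since") == LAST_MODIFIED:
            self.send_response(304)
            self.end_headers()
        else:
            body = b"<html>page content</html>"
            self.send_response(200)
            self.send_header("Last-Modified", LAST_MODIFIED)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# First visit: full download; the server reports Last-Modified.
first = urlopen(url)
first_status = first.status  # 200

# Revisit: the crawler sends If-Modified-Since; an unchanged page yields 304.
try:
    urlopen(Request(url, headers={"If-Modified-Since": LAST_MODIFIED}))
    second_status = 200
except HTTPError as e:
    second_status = e.code  # urllib raises for non-2xx codes, including 304

print(first_status, second_status)  # 200 304
server.shutdown()
```

The bandwidth saved by that 304 response is exactly the "favor" to search engines the guideline describes.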
