How can I get back to the Google homepage in English?
You can access the standard Google homepage by clicking on the "Google in English" link from any of our foreign language homepages.
How can I improve the accuracy or number of results when searching for foreign words?
Google's technology is sensitive to the precise spelling of foreign words. Spelling the word correctly with the appropriate foreign characters will significantly improve both the quality of your search results and the number of hits. For instructions on typing foreign characters with an English or any other keyboard, visit http://www.umass.edu/langctr/keyboardhelp.html.
Why does Google not offer more language choices?
By year-end, we hope to offer Google in numerous additional languages. Stay tuned!
How can I suggest a translation or spelling improvement to Google?
Google aims to provide the most accurate translations on our international pages. However, due to the subjective nature of translation, you may encounter wording that does not seem right to you. Please send the specific words along with a possible alternative to intl@google.com and we will consider your feedback when updating our user interface.
How do I search within results?
Sometimes a search is in the right area but gives too many results. To narrow the results down, you might want to do a new search confined to the URLs returned by the too-broad search query. This is often called "narrowing a search" or "searching within the current search results."
Google makes this process easy. Since Google only returns web pages that contain all the words in your query, to narrow a search all you need to do is add more words to the end of your query. This gives you a new query that will return a subset of the pages returned by the too-broad query. You can also exclude a word by putting the "-" operator immediately in front of the undesirable term.
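For example (the search terms here are purely illustrative): if the query

palo alto

returns too many pages, then

palo alto restaurants

returns only the subset of those pages that also contain "restaurants", while

palo alto -restaurants

returns the subset that does not mention it.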
How can I restrict my search to specific extensions, e.g., .edu?
We are constantly seeking to build upon our search capabilities, but we currently don't support this kind of restriction. However, adding the extension -- that is, adding "edu" as a query term -- will often work quite well. The "-" operator may be used to exclude any unwanted terms. You may also want to take a look at our special searches.
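For instance (the topic here is purely illustrative): a query such as

financial aid edu

tends to favor pages from .edu sites, and

financial aid edu -loans

additionally excludes pages that mention "loans".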
How do I remove previous searches when I begin a new query?
Fortunately (and unfortunately), this has nothing to do with Google. It's a feature of Internet Explorer 5.0, which can be disabled through one of IE 5's many menus. (You can verify that Google isn't the cause by filling out a form elsewhere on the web and then returning to that page; the same behavior appears.) Alternatively, you can delete individual entries from the history by highlighting them in the drop-down list and pressing the Delete key.
How can I set the default number of hits to 100?
For the time being, this functionality is not supported. We hope to make your search experience so efficient that you won't need to scroll through more than 10 results.
Google's order of hits is determined by a combination of several factors, including PageRank priorities. Please check out our Why Use Google page for more detail, or take a look at Larry and Sergey's article The Anatomy of a Large-Scale Hypertextual Web Search Engine for pleasure reading.
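For the curious: the Anatomy paper defines PageRank (in simplified form) for a page A that pages T1...Tn link to as

PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

where C(T) is the number of links going out of page T and d is a damping factor (the paper suggests a value around 0.85). Intuitively, a page ranks highly when many pages, and especially other high-ranking pages, point to it.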
What does the RN symbol next to a search result mean?
The RN (RealNames) mark that sometimes appears on the Google search results page reflects a new partnership specifically designed to enhance the Google search experience while maintaining the integrity of a Google search.
RealNames manages a worldwide Internet Keyword registration system. It allocates keywords through an adjudication process that protects intellectual property rights and prevents cyber-squatting. Click here for more information about RealNames' Internet Keyword Approval Policy.
Internet Keywords are brand, product, service or company names that specifically route to the owner's website.
Why has "Show Matches" replaced the "Cached" link ?
You can still access the cached pages by choosing the "Show Matches" link on the search results page. (Google stores many web pages in its cache to retrieve for users as a back-up in case the page's server temporarily fails.) "Show Matches" has replaced "Cached" because the terms that match your query are now highlighted each time they appear on the cached page.
Do I need to submit updated and/or outdated links and pages to Google?
Google updates its index as often as necessary, so you do not need to submit updated or outdated links. We should be able to pick them up during each crawl.
How do I submit multiple pages?
Please visit our Add URL page to input your URLs. Only the top-level page from a host is necessary; you do not need to submit each individual page. Our crawler, Googlebot, will be able to find the rest!
Why doesn't Google index any of my pages?
Pages that have not been indexed yet probably haven't been because not enough other pages on the web link to them; if other pages don't link to them, we can't assign them a PageRank (our proprietary measure of a page's importance) in a reasonable way. Once other links point to them, we'll pick them up. Google looks at the interconnectedness of links among pages and allows the open, vast nature of the Internet to yield the most relevant search results.
How long does it take the Google robot to index a URL once it is submitted?
Depending on the timing of the submission and our crawl, the entire process can take anywhere from one to four weeks.
Unlike many search engines, Google can return a page as a result even if Googlebot has not yet crawled it. Pages that are known but haven't been crawled can be returned as results, but since we have not yet looked at them, their titles aren't shown; the URL is shown instead.
How do I request that Google not return cached material from my site?
Google stores many web pages in its cache to retrieve for users as a back-up in case the page's server temporarily fails. Users can access the cached version by choosing the "Show Matches" link on the search results page.
If you do not want your content to be accessible through Google's cache, you can use the NOARCHIVE meta-tag. Place this in the <HEAD> section of your documents:
<META NAME="ROBOTS" CONTENT="NOARCHIVE">
This tag will tell robots not to archive the page. Google will continue to index and follow links from the page, but will not present cached material to users. If you want to allow other robots to archive your content, but prevent Google's robots from caching, you can use the following tag:
<META NAME="GOOGLEBOT" CONTENT="NOARCHIVE">
Note that the change will occur the next time Google crawls the page containing the NOARCHIVE tag (typically at least once per month). If you want the change to take effect sooner, the site owner must contact us and request immediate removal of the archived content. Also, the NOARCHIVE directive only controls whether the cached page is shown. To control whether the page is indexed, use the NOINDEX tag; to control whether links are followed, use the NOFOLLOW tag. See the Robots Exclusion page for more information.
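For example, combining the two directives in the same META convention shown above, a page that should be neither indexed nor mined for links would carry:

<META NAME="ROBOTS" CONTENT="NOINDEX,NOFOLLOW">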
How do I request Google to not crawl parts or all of my site?
There is a standard for robot exclusion at http://info.webcrawler.com/mak/projects/robots/norobots.html. You can put a file on your server called robots.txt that can exclude Googlebot or other "web crawlers." Googlebot has a user-agent of "Googlebot".
There is another standard for telling robots not to index a web page or follow links on it, which may be more helpful in some cases, since it can be used more conveniently on a page-by-page basis. It involves placing a "META" element into a page of HTML, and is described here; you can also read what the HTML standard has to say about these tags.
Remember, changing your server's robots.txt file or changing the "META" elements on its pages will not cause an immediate change in the results Google returns. It is likely to take a while for any changes you make to propagate to Google's next index of the web.
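As a concrete sketch of the robots.txt format (the /private/ path is only a placeholder), the following file excludes Googlebot from one directory while leaving all other crawlers unrestricted:

User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow:

An empty "Disallow:" value means nothing is off-limits to the crawlers that record applies to.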
Why is Googlebot asking for a file called robots.txt which isn't on my server?
Robots.txt is a standard document that can tell Googlebot not to download some or all information from your web server. For information on how to create a robots.txt file, see The Robot Exclusion Standard.
Why is Googlebot trying to download incorrect links from my server? Or from a server that doesn't exist?
It is a property of the web that many links will be broken or outdated at any given time. Whenever anyone mistypes a link to your site, or fails to update their pages to reflect changes on your server, Googlebot will try to download that incorrect link. This is also why you may get hits on a machine that is not even a web server.
Why is Googlebot downloading information from our "secret" web server?
It is almost impossible to keep a web server secret by not publishing any links to it. As soon as someone follows a link from your "secret" server to another web server, your "secret" URL is likely to appear in the Referer header, and it can be stored and possibly published by the other web server in its referer log. So, if there is a link to your "secret" web server or page anywhere on the web, it is likely that Googlebot and other "web crawlers" will find it.
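To illustrate the mechanism (the hostnames below are made up): when a visitor follows a link from your "secret" page to another site, the visitor's browser typically sends a request along the lines of

GET /somepage.html HTTP/1.0
Referer: http://secret.example.com/links.html

and the receiving server is free to record that Referer value in its logs.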
Why isn't Googlebot obeying my robots.txt file?
In order to save bandwidth, Googlebot downloads the robots.txt file only once a day, or whenever we have fetched many pages from the server. So, it may take a while for Googlebot to learn of any changes that might have been made to your robots.txt file. In addition, Googlebot is distributed across several machines, each of which keeps its own record of your robots.txt file. Also, check that your syntax is correct against the standard at http://info.webcrawler.com/mak/projects/robots/norobots.html. If there still seems to be a problem, please let us know, and we will correct it.
How do I register my site with Googlebot so it will be indexed?
See the Add URL form.
Why are there hits from multiple machines at Google.com all with user-agent Googlebot?
Googlebot was designed to be distributed across several machines to improve performance and scale as the web grows. Also, to cut down on bandwidth usage, we would like to run many crawlers on machines located near, in network terms, the sites they are indexing.
For more answers, see the Robots FAQ.
My Googlebot question is not answered here. Where do I send my question?
Please send questions regarding our Googlebot technology to googlebot@google.com.