Ugh!!! Last night I received an e-mail from Google Alerts with the list of pages it found for the site lejoslearning.com. It looked normal enough, but the page it flagged was an old article, so I checked it out.
It was a copy of the article as it appears on my test site, which is set up as a subdomain:
http://testsite.lejoslearning.com (not the actual subdomain, but you get the idea).
I just checked the robots.txt file in Google Webmaster Tools, and it has the correct folder disallowed, as shown in the example below.
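(That's not the actual folder name either, same as the subdomain above, but the entry in my main site's robots.txt looks something like this:)

    # hypothetical folder name; keep crawlers out of the test copy
    User-agent: *
    Disallow: /testsite/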
But when I go to Google and search for site:testsite.lejoslearning.com, it returns 180 pages in the index.
What am I doing wrong? Does Google treat a subdomain as its own separate domain, which would require its own separate robots.txt file? I'm confused, and I have a feeling this could cause some serious duplicate content issues.
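If a subdomain really is its own site, I'm guessing I'd need a second robots.txt served from http://testsite.lejoslearning.com/robots.txt that blocks everything, something like:

    # block all crawlers from the entire test subdomain
    User-agent: *
    Disallow: /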
Please share your thoughts. Thanks.