I am considering a link exchange with a site that dynamically delivers its links pages. Is Google able to crawl and follow dynamically generated links?
They can read dynamic pages just fine. If not, Amazon wouldn't have a single listing in their index, and I can tell you that Amazon's pages rank quite well.
As long as it doesn't have a noindex meta or isn't blocked by robots.txt, it'll be fine. You can always try getting the cached page in Google if you are really concerned about it.
I'm not a programmer, but I get involved in some maintenance of ASP now and then. I am curious how Google would be able to follow the ASP code, access a database's records to figure out what the returned links are, and then follow them to the next site.
Correct me if I'm wrong, and again, I am just trying to make sense of this, but the ASP page would not have links until the database has been queried, and then the resulting ASP return page would be populated with the query results from the database, correct? There are ASP scripts that open the database, invoke queries and then return the results - in this case links and site descriptions. If that is so, I still can't see how Google can follow and index links that are not hard-coded.
Perhaps your web browser doesn't see any links then, either?
Google, like your web browser, doesn't get the source code to a page... they get the HTML code that is generated. They don't get "Special treatment"... whatever you see when you view the page in your web browser and click "View / Source" will be what their bot is looking at.
If it looks right in a browser, it'll look right to their crawler. A link is a link... and a page is a page... the technology doesn't matter. Regardless of whether it was created in your HTML editor months ago or by an ASP script when the page is requested, the browser and the bots get HTML code that is complete and has all of the links showing.
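To make that concrete, here's a rough sketch in Python (a toy stand-in for the real ASP/database setup; the function name, table, and columns are all hypothetical) of what the server does before anything reaches a browser or a bot:

```python
import sqlite3

def render_links_page(db_path="links.db"):
    """Simulate what an ASP script does server-side: query the
    database and emit finished HTML. A browser or a crawler only
    ever sees the returned string, never this code or the SQL."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT url, title FROM links").fetchall()
    conn.close()
    items = "\n".join(
        f'<li><a href="{url}">{title}</a></li>' for url, title in rows
    )
    return f"<html><body><ul>\n{items}\n</ul></body></html>"
```

Whatever ends up in the database comes out as ordinary `<a href>` tags in the response, which is all the crawler needs.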
Does that help at all?
Well...no ;>)) Maybe it's the "forest and the trees" thing. If I have a link on my index page to a page called links.asp, this ASP script opens the database and invokes queries on its records. After the query is run, the resulting return page, which is an ASP as well, will then have links that can be viewed and clicked on. However, unless Google's technology can follow the ASP script and invoke the queries to find the records with the links, I'm sorry, I still don't get how they can follow the links.
But I do appreciate your help! I'll talk to our programmer, whom I only see a couple of times per week.
They don't run ASP code... the server does that and spits back HTML code with an .asp extension. Go to one of those pages and click "View / Source" and see what you've got... there's no ASP code, no database code, nothing but HTML. The rest is all processed by your web server.
This I do understand - not seeing any ASP code in a returned ASP "page". However, how will Google invoke the ASP "page" with the script (which is on the server) that opens the database? Doesn't there have to be a "click" that invokes the page and the script that contacts the database?
The return ASP page has HTML code as a framework but will only be populated with the actual links once the database is queried. Again, without invoking the first ASP file, I don't see how the page gets populated so Google can follow links.
Sorry - it must be me - I need coffee!
Any time the links page is requested, the server assembles the page, doing all database calls, etc.
This isn't anything done by Google, a person browsing, etc. The server sees the request and runs the script to make your page.
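The flow above can be sketched in Python (a toy stand-in for the real ASP/IIS setup; the handler name and URLs are hypothetical). The point is that the HTTP request itself plays the role of the "click" - whoever makes it, the server runs the script and returns the same finished HTML:

```python
def handle_request(path):
    """Sketch of the server's role: any request for links.asp --
    whether it comes from a browser or from Googlebot -- triggers
    the same script and receives the same finished HTML."""
    if path == "/links.asp":
        # The database lookup would happen here, on the server,
        # every time the page is requested (stand-in list below).
        links = [("http://example.com/", "Example Site")]
        body = "".join(f'<a href="{u}">{t}</a>' for u, t in links)
        return f"<html><body>{body}</body></html>"
    return "<html><body>Not found</body></html>"
```

So Google never needs to "run" the ASP or touch the database: requesting the URL is enough to make the server do that work and hand back plain HTML with followable links.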