
Thread: The Power of the WebProWorld SEO Community

  1. #11
    Webnauts (WebProWorld MVP) | Join Date: Aug 2003 | Location: European Community | Posts: 8,934

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by inertia View Post
    OK. I understand all the theory, but one thing I don't understand is the internal flow of PageRank...
    First, it is not a theory. It is an evidence-based research fact.

    Quote Originally Posted by inertia View Post
    Do pages blocked with the robots noindex directive still build PR from internal links?
    No.

    Quote Originally Posted by inertia View Post
    If they don't, then I can stop using nofollow tags! But as you've just started using them, I guess they don't?
    Yes. But again, if someone else is linking to those pages, they will still build PR. And that is where you leak PR. Do you want the pages you use nofollow on to have PR? Or would you prefer that the PR pass to pages of your choice?
    John S. Britsios, Forensic SEO & Social Semantic Web Consultant | My personal blog Algohunters

  2. #12
    inertia (WebProWorld MVP) | Join Date: Apr 2006 | Posts: 1,189

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by Webnauts View Post
    First, it is not a theory. It is an evidence-based research fact.
    Just a confusion in my English there, John. I meant theory as in music theory or mathematical theory. That said, it is still theoretical to me, as I haven't tried it out myself yet!

    Quote Originally Posted by Webnauts View Post
    Do pages blocked with the robots noindex directive still build PR from internal links?
    No.
    But this is what's confusing me... You're saying that a noindex page can still build and pass PR from outside links. Matt C also says that a page with a meta noindex can still build PR, so isn't this just a difference in granularity?

    Quote Originally Posted by Webnauts View Post
    Yes. But again, if someone else is linking to those pages, they will still build PR. And that is where you leak PR. Do you want the pages you use nofollow on to have PR? Or would you prefer that the PR pass to pages of your choice?
    You've lost me... Does the noindex robots.txt directive basically mean that I have added nofollow attributes to all the links pointing to the blocked page? Forget about external links for a minute... we've resolved that issue. Let's think about internal PR sculpting here!
    My LinkedIn Profile -- Lancaster Builder

    Twitter: @mattbennettseo

  3. #13
    wige (WebProWorld MVP) | Join Date: Jun 2006 | Posts: 2,981

    Re: The Power of the WebProWorld SEO Community

    To restate what John has said above, and to make sure that I am properly understanding how PageRank works with current technologies, let's imagine two pages: one page has a link on it (we will call this the source page), and the other is the page that the link points to (the destination page).

    The flow of PageRank is calculated when the source page is crawled, and is not affected by any factors other than those on the source page; i.e., the flow of PageRank can be affected by a nofollow attribute on the link itself, or by the source page being blocked by a disallow directive (because then Google can't see the links). Google does not consider anything about the destination page when calculating the PageRank the destination page receives.

    EDIT to add: A page that has a link pointing to it always receives PageRank; a link that is marked as nofollow (either with a meta tag or a rel attribute) is the only exception. A page that is marked noindex will still receive PageRank; that PageRank simply can't be seen. Since the page doesn't get put into the index, you have no way of asking Google what its PageRank is. Google still knows that the URL was linked to, and PageRank is still allocated to that URL, because the fact that the page is set to noindex is unknown to Google when the PageRank from the source page is being divided amongst the outgoing links.

    There are a few different ways you can encounter a "dead end". The original "dead ends" were pages that had incoming links but no outgoing links. Pages that have all of their outgoing links marked nofollow fall into the same category (a nofollow meta tag, or a nofollow attribute on every link), as do pages that can't be crawled due to a disallow robots.txt directive (since Google can't see the links on the page), and pages that go through a 302 redirect (Google does this intentionally, so that 302 redirects can't pass PageRank).

    What Google does with a dead end is take all the PageRank from all the known dead ends in the index and distribute it amongst all the other pages in the entire index. As a result, a dead end provides little to no actual benefit to the website itself. In fact, it can harm the site by draining PageRank that could be better spent on more important pages. So you want to keep PageRank flowing anywhere but to a dead end.

    However, there is no way to automatically stop PageRank from going to dead ends: you can only change the flow of PageRank on the source page, never the destination page. The only option is to manually add a nofollow attribute to every link that goes to a dead end, which is not ideal.

    So, what I figured out while typing this overly long summary is that 403 pages provide a way to recover PageRank that would otherwise go to a dead end. Basically, the trick is in the error message. You give users a way (through authentication) to see the content that you don't want the spiders to crawl, while giving the spiders an error page whose links recover the PageRank and send it to the important areas of your site.
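    To make the dead-end mechanics above concrete, here is a minimal sketch in Python of the textbook PageRank iteration with dangling-node handling. It is a toy model under the assumptions described in this post, with an invented three-page site; it is not Google's actual implementation. The first graph lets the blocked page act as a dead end; in the second, its custom 403 page links back to the homepage, as in the trick just described:

    # Toy PageRank with dead-end ("dangling node") handling: a dead end's
    # rank is pooled and spread evenly over every page in the index.
    def pagerank(links, iterations=50, d=0.85):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        pr = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_pr = {p: (1 - d) / n for p in pages}
            dangling = 0.0
            for page, outgoing in links.items():
                if outgoing:
                    share = d * pr[page] / len(outgoing)
                    for target in outgoing:
                        new_pr[target] += share
                else:
                    dangling += pr[page]  # dead end: its rank is pooled...
            for p in pages:
                new_pr[p] += d * dangling / n  # ...and spread over everything
            pr = new_pr
        return pr

    # "books" is blocked and serves a bare 403 with no links: a dead end.
    dead_end = pagerank({"home": ["about", "books"],
                         "about": ["home"],
                         "books": []})

    # Same site, but the custom 403 page links back to the homepage.
    recovered = pagerank({"home": ["about", "books"],
                          "about": ["home"],
                          "books": ["home"]})

    print(dead_end["home"], recovered["home"])  # "home" keeps more PR in the second run

    Note that in this closed three-page toy the pooled rank lands back on the same three pages; in the real index it would be spread over billions of pages, which is why a dead end is effectively a leak out of your site.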
    The best way to learn anything, is to question everything.
    WigeDev - Freelance web and software development

  4. #14
    Webnauts (WebProWorld MVP) | Join Date: Aug 2003 | Location: European Community | Posts: 8,934

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by inertia View Post
    Just a confusion in my English there, John. I meant theory as in music theory or mathematical theory. That said, it is still theoretical to me, as I haven't tried it out myself yet!
    Got ya! I was just teasing you. But I did sound serious...

    Quote Originally Posted by inertia View Post
    But this is what's confusing me... You're saying that a noindex page can still build and pass PR from outside links. Matt C also says that a page with a meta noindex can still build PR, so isn't this just a difference in granularity?
    OK. Let's take an example:

    Page A has PR 4. On page A there is a link to page B, without a nofollow attribute or anything of the sort. Right?

    Now, on page B there is a link to page C. OK?

    What happens? Googlebot will follow the link on page A to page B, and it will see that page B should not be taken into account, but it will still crawl the page looking for a link to pass the PR to. If there is no link, or if the link(s) carry a nofollow attribute or are otherwise blocked, you will have a dead end (a dangling node).
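    As a rough back-of-the-envelope version of this example (a toy calculation assuming the simplified model in which each page forwards d times its PR, split across its outgoing links, ignoring the (1 - d) base term; the damping factor 0.85 is the commonly cited value, not a confirmed Google figure):

    d = 0.85       # assumed damping factor
    pr_a = 4.0     # page A's PR in the example above
    to_b = d * pr_a / 1  # A has a single outgoing link, pointing at B
    to_c = d * to_b / 1  # B is noindexed, but its link still forwards PR to C
    print(to_b, to_c)    # roughly 3.4 and 2.89

    If B's link to C were nofollowed or otherwise blocked, the PR arriving at B would stop there: exactly the dead end (dangling node) described above.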

    Quote Originally Posted by inertia View Post
    You've lost me... Does the noindex robots.txt directive basically mean that I have added nofollow attributes to all the links pointing to the blocked page?
    No. That happens only if you use the nofollow meta tag.

    Quote Originally Posted by inertia View Post
    Forget about external links for a minute... we've resolved that issue. Let's think about internal PR sculpting here!
    Sure. We are only talking about internal PR sculpting.
    John S. Britsios, Forensic SEO & Social Semantic Web Consultant | My personal blog Algohunters

  5. #15
    kgun (WebProWorld MVP) | Join Date: May 2005 | Location: Norway | Posts: 7,697

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by Webnauts View Post
    Using .htaccess to deny access to a folder and its subfolders is a bad thing to do, if you are returning a 403. That is because you are creating a dangling-node page, otherwise called a dead-end page/folder.
    Is this wrong: http://www.kjellbleivik.com/Books/

    .htaccess

    order deny,allow
    deny from all
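    # plus one "allow from <IP>" line here per whitelisted address
    # (Apache 2.2 syntax; the actual IPs are omitted in the post)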

    except whitelisted IPs in the /Books/ folder?

    1. Will the whitelist create problems?
    2. More precisely: can bots get a URI reference or a dangling node (link) via the whitelist?

  6. #16
    crankydave (WebProWorld MVP) | Join Date: Aug 2004 | Posts: 4,726

    Re: The Power of the WebProWorld SEO Community

    Actually, it should be rather easy to test.

    Build and orphan a few pages. Four should be plenty: A, B, C, D.

    Link A to B... B to C... C to D

    Noindex A, B, and C

    Throw a single link from a page with a high toolbar PR at A. If the PR "jumps" all the way to D, then D will display a high toolbar PR as well. If it passes through all the pages, then it won't. Also, remove the noindex and see if there is any change.

    Whether or not a page is noindexed really doesn't matter. If there is an external link pointing to it, then there is a "probability" that it can be reached by a random surfer. Ergo, it "has" PR once Google finds the link.

    The more links a random surfer has to follow to ultimately get to the "destination" page, the less probability it will be reached... less PR.
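    To put numbers on that last point (same simplified model and assumed damping factor as in the sketches above; illustrative only, not Google's actual figures), the share of PR surviving n forwarding hops shrinks roughly like d to the power n:

    d = 0.85  # assumed damping factor
    for hops in range(1, 5):
        print(hops, d ** hops)  # roughly 0.85, 0.72, 0.61, 0.52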

    Dave

  7. #17
    kgun (WebProWorld MVP) | Join Date: May 2005 | Location: Norway | Posts: 7,697

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by Webnauts View Post
    First, it is not a theory. It is an evidence-based research fact.
    Is that possible, research without a theory?

  8. #18
    wige (WebProWorld MVP) | Join Date: Jun 2006 | Posts: 2,981

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by kgun View Post
    Is this wrong: http://www.kjellbleivik.com/Books/

    .htaccess

    order deny,allow
    deny from all

    except whitelisted IPs in the /Books/ folder?

    1. Will the whitelist create problems?
    2. More precisely: can bots get a URI reference or a dangling node (link) via the whitelist?
    If you whitelist the search engines, then yes, you will be fine from a PageRank standpoint. However, if the bots just get the default error message that non-whitelisted users get, then the page is a dead end. The custom error document is not being displayed, so there are no links to receive the PageRank that is flowing to this page.
    The best way to learn anything, is to question everything.
    WigeDev - Freelance web and software development

  9. #19
    Webnauts (WebProWorld MVP) | Join Date: Aug 2003 | Location: European Community | Posts: 8,934

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by wige View Post
    To restate what John has said above, and to make sure that I am properly understanding how PageRank works with current technologies, let's imagine two pages: one page has a link on it (we will call this the source page), and the other is the page that the link points to (the destination page). The flow of PageRank is calculated when the source page is crawled, and is not affected by any factors other than those on the source page; i.e., the flow of PageRank can be affected by a nofollow attribute on the link itself, or by the source page being blocked by a disallow directive (because then Google can't see the links).
    If I understood correctly, we agree.

    Quote Originally Posted by wige View Post
    Google does not consider anything about the destination page when calculating the PageRank the destination page receives.
    Exactly.

    Quote Originally Posted by wige View Post
    EDIT to add: A page that has a link pointing to it always receives PageRank; a link that is marked as nofollow (either with a meta tag or a rel attribute) is the only exception. A page that is marked noindex will still receive PageRank; that PageRank simply can't be seen. Since the page doesn't get put into the index, you have no way of asking Google what its PageRank is.
    I think I must disagree on one point. A page that is not indexed cannot have PR itself. But the PR coming to the noindexed page will flow to the pages linked from it.

    Quote Originally Posted by wige View Post
    Google still knows that the URL was linked to, and PageRank is still allocated to that URL, because the fact that the page is set to noindex is unknown to Google when the PageRank from the source page is being divided amongst the outgoing links.
    I am not sure if I understand.


    Quote Originally Posted by wige View Post
    There are a few different ways you can encounter a "dead end". The original "dead ends" were pages that had incoming links but no outgoing links.
    Pages that have all of their outgoing links marked nofollow fall into the same category (a nofollow meta tag, or a nofollow attribute on every link), as do pages that can't be crawled due to a disallow robots.txt directive (since Google can't see the links on the page), and pages that go through a 302 redirect (Google does this intentionally, so that 302 redirects can't pass PageRank).
    Correct.

    Quote Originally Posted by wige View Post
    What Google does with a dead end is take all the PageRank from all the known dead ends in the index and distribute it amongst all the other pages in the entire index. As a result, a dead end provides little to no actual benefit to the website itself. In fact, it can harm the site by draining PageRank that could be better spent on more important pages. So you want to keep PageRank flowing anywhere but to a dead end.
    Exactly.

    Quote Originally Posted by wige View Post
    However, there is no way to automatically stop PageRank from going to dead ends: you can only change the flow of PageRank on the source page, never the destination page. The only option is to manually add a nofollow attribute to every link that goes to a dead end, which is not ideal.
    I agree.

    Quote Originally Posted by wige View Post
    So, what I figured out while typing this overly long summary is that 403 pages provide a way to recover PageRank that would otherwise go to a dead end. Basically, the trick is in the error message. You give users a way (through authentication) to see the content that you don't want the spiders to crawl, while giving the spiders an error page whose links recover the PageRank and send it to the important areas of your site.
    You mean creating a custom 403 template with links to the pages I want Googlebot to crawl and pass the incoming PR to? I am not sure how that can work. Or am I confused?
    John S. Britsios, Forensic SEO & Social Semantic Web Consultant | My personal blog Algohunters

  10. #20
    kgun (WebProWorld MVP) | Join Date: May 2005 | Location: Norway | Posts: 7,697

    Re: The Power of the WebProWorld SEO Community

    Quote Originally Posted by wige View Post
    However, if the bots just get the default error message that non-whitelisted users get, then the page is a dead end. The custom error document is not being displayed, so there are no links to receive the PageRank that is flowing to this page.
    And your solution (a redirect) is?

    What was discussed in these two threads:

    http://www.webproworld.com/web-progr...tml#post411565

    http://www.webproworld.com/search-en...tml#post405790

