A user from Copenhagen asked Google whether a page blocked via robots.txt can still accumulate PageRank through links from public pages. Matt Cutts provides a clear answer to this question and explains how it all works.

Webmasters are usually preoccupied with boosting their site's search ranking, but sometimes they want the opposite — to hide certain pages from web crawlers. This is common for test pages and other private content. Crawlers are told to stay out of these sections through directives in a file called robots.txt. Google obeys the instructions in this file, but be aware that other crawlers may interpret it differently or ignore it entirely.
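As a minimal sketch, a robots.txt file placed at the site root might look like the following (the directory names here are hypothetical examples, not from the video):

```
# Hypothetical robots.txt at https://example.com/robots.txt
# Asks all compliant crawlers to skip the test and private areas
User-agent: *
Disallow: /test/
Disallow: /private/
```

The `User-agent: *` line applies the rules to every crawler, while each `Disallow` line names a path prefix that compliant bots should not fetch. Note that this only prevents crawling, not indexing — which is exactly the distinction the video addresses.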

Even if a page is never crawled, it can still accumulate link juice when other pages link to it, and it may appear in search results. Matt cited the California DMV website, which blocked Googlebot a couple of years ago. Because many people linked to it and its relevance for certain keywords was unmatched, the site remained a top result for related queries.
Video Link: Will a link to a disallowed page transfer PageRank?