Doing the same thing as the other levels, we view the source and see this:
Not even Google will find it? What does that mean?
Well, if you didn't already know, web servers often host a file called "robots.txt" which tells search engine crawlers, such as Google's, which paths not to index. Google has one of its own that we can view here. It's pretty much a listing of the paths the website wants to keep a "secret" from search engines.
Knowing this, let's see if this website has a robots.txt file by appending /robots.txt to the URL:
User-agent: *
Disallow: /s3cr3t/

So we can now see the website wants to disallow the crawling of the /s3cr3t/ folder... so let's just go straight TO that folder.
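The crawler behavior this rule controls can be sketched with Python's standard urllib.robotparser. This is just an illustration (the /index.php path is a hypothetical example of a non-blocked page): well-behaved bots would skip the secret folder, but nothing stops us from browsing to it directly.

```python
from urllib.robotparser import RobotFileParser

# Feed the parser the same rules we found in the site's robots.txt,
# reconstructed here as a list of lines.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /s3cr3t/",
])

# A crawler honoring robots.txt would refuse to fetch the secret folder...
print(rp.can_fetch("*", "/s3cr3t/"))    # False
# ...while any path without a Disallow rule is fair game.
print(rp.can_fetch("*", "/index.php"))  # True
```

Note that robots.txt is purely advisory: it only asks polite crawlers to stay away, which is exactly why listing a "secret" folder there leaks its location.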
natas4:8ywPLDUB2yY2ujFnwGUdWWp8MT4yZrqz

Which authenticates us to Level 4.