
Lesson 1.5
The Web Before Google


LEARNING GOAL: The Basics of Web History

Students will understand how the Internet evolved from a classified government research project into a worldwide resource with the help of highly educated teachers and students.



Searching for information on the World Wide Web is sometimes like searching for lost keys in a really big house. If you left the keys on a table, they’ll be easier to see. But if you put those keys in a drawer or they fell underneath a pile of laundry, they’ll be much harder to find. The World Wide Web is filled with piles and piles of information that people have shared through websites and images and videos.

Finding one single piece of information inside of those many piles would almost seem impossible, but when search engines came along, they appeared to make it easy. Somehow, some way, search engines can look through all those piles and tell you who the first person was to walk on the moon or the distance between Orlando and Miami. And these search engines give you answers almost immediately. How is that possible?

During this lesson, you will learn how information on the World Wide Web is organized, how all that information can be searched, and what some of the early search engines were called. You’ll also learn an important point: search engines can’t answer every question you have. By the time this lesson is finished, you’ll know how the biggest search engine in the world began.




Chances are, you probably believe you can find anything on the World Wide Web. Searching is easy, you might even say. So let’s see if you can find answers to the following five questions using any search engine that you feel comfortable with (Google, Bing, Yahoo, etc.).

To record your answers, come and pick up an index card and a pencil from the table in the front of the classroom. There are only five questions. Answer as many as you can in 10 minutes. There is a reward for the person who can answer all five questions in 10 minutes.

Question #1 – In 2014, Anthony Doerr wrote a book that was published by Scribner. The book was called All the Light We Cannot See. What were the five cities listed under the publisher on the title page?

Question #2 – Back in 2015, Emily Arroyo, a student from Milwee, won two medals at the State Rhythmic Competition. What were the two medals she won and what month did she win them?

Question #3 – On the first day of November 2016, what two kinds of salads were available for students in every Seminole County elementary school?

Question #4 – On their last Christmas in the White House, President Obama and his wife gave an address. What color dress was the First Lady wearing?

Question #5 – On June 18, 2003, the Parker Pioneer reported that a fire had burned through Parker Valley. How many acres were burned and what time of day did officials say that the fire was contained?




Your first quiz will be on Monday, September 11. The quiz will ONLY include questions that were highlighted between Lesson 1.1 and Lesson 1.5. Between now and then, we will begin Lessons 1.6, 2.1, and 2.2, but you will not be tested on those lessons. We will do one more review during Lesson 1.6, so everyone should be ready.


By this time, you should know that the Internet is a network that connects computers together all around the world and that the World Wide Web is just a space on the Internet where people share information. Webpages and websites are one way of sharing information on the web. Whenever you open up Notepad and save a new page with HTML, you are creating a file that sits inside of a folder and takes up space. Even if that page has images or videos, those images and videos are ALSO just files sitting in a folder somewhere. So in the end, every single website in the world is just a collection of files and folders.


Files on the World Wide Web can be seen by anyone in the world. But most of the files we create (even Microsoft Word documents or PowerPoint presentations or photographs) are just sitting somewhere in a folder on our computer all by themselves. In other words, the only people who can see those files are people on that computer. We call these local files and local folders because they are in a very specific area. In the same way, you might tell someone in another state that you live in a local area called Longwood, Florida or Seminole County.

But finding a local file means going through a series of local folders. For example, if you were told to go to your “misc” folder, you should know by now that this means you have to open your U: drive, then open your WebDesign folder, then go into your “misc” folder. In other words, you could follow a simple path to find a specific file. This is what we call a “file path.” In fact, your file path might look something like this.
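The idea of a path through folders can be explored with a few lines of Python. The path below is just a hypothetical example in the style described above (a U: drive, a WebDesign folder, a “misc” folder), not a real file on your computer:

```python
from pathlib import PureWindowsPath

# A made-up local file path: drive, then folders, then the file itself.
path = PureWindowsPath(r"U:\WebDesign\misc\index.html")

print(path.drive)  # the drive the file lives on: U:
print(path.parts)  # every folder along the way, plus the file
print(path.name)   # the file at the end of the path: index.html
```

`PureWindowsPath` only describes the path; it never touches the disk, so this sketch runs anywhere.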


When you publish a page to the World Wide Web, remember that the page itself is still just a file. When you publish that file (or page), you are putting it into a folder on the web that others can see if they know where to go. And similar to local files in local folders, there is a path. Any file path on the web is known as a Uniform Resource Locator, or URL. Some people just call it a web address. The only thing YOU need to know is that a URL is about a file’s LOCATION. And a URL looks like the link at the top of our lesson today:


Remember when we said that the World Wide Web is just a big space on the Internet where people share information? If that’s true and someone wants to share information in that big space, then they have to claim a section of that space. This is what we call a web domain. Just think of it like a king who rules over a kingdom. In that world, the kingdom is a large space that belongs to the king, just like a web domain is a small section of space on the World Wide Web that belongs to its owner.

When someone decides to claim a section of space on the World Wide Web, they purchase a web domain. And it’s usually not very expensive (between $7 and $12). Once they purchase a little bit of space to share information (a domain), that’s when they upload or transfer their files and folders onto the domain. Each of those pages is written with Hypertext Markup Language (HTML), which is why you see http:// at the beginning of most web addresses. The HTTP stands for Hypertext Transfer Protocol, which is a fancy way of saying the browser knows how to request and display pages written with HTML.


The World Wide Web organizes websites by their domain category. For example, domains that end in .edu belong to places of education like schools and colleges. Domains that end in .gov belong to the government. Domains that end in .org usually belong to an organization. And so on. Sometimes web designers call this last part of the domain an extension or a suffix because it comes at the end (.com, .ca, .mil, etc).


On the Milwee Middle School website, there is a page that shows directions to our school. If you look carefully near the top of that page, the browser shows this URL:

The page with the map and the mission statement and all those links is still just a single page on the school website, but it is also a file called “SchoolDirections.aspx” located inside of a folder called “AboutOurSchool.” And the “AboutOurSchool” folder is inside of another folder called “QuickLinks.” And the “QuickLinks” folder is inside of another folder called “Home.” And the “Home” folder sits on the school’s web domain. Those forward slashes ( / ) are just like the little arrows in your local computer’s file path, and they simply mean “folder.”




Right now, there are three globes hidden in the classroom like Easter eggs. Raise your hand if you have discovered one of the globes.

Now imagine you were told the same three globes were hidden throughout the Milwee campus. Or if the same three globes were hidden somewhere in Longwood. Or somewhere in Florida. Or the United States. Or the world.

When all you have to do is look around for something for a few minutes (missing car keys, a wallet, or some globes in a classroom), that’s what we call searching. You looked and you found. But when you are determined to find an answer to a problem that can’t be solved with a few minutes of looking around, that’s what we call research. Those who see a problem but are not really determined to find an answer cannot say they have done any true research, because they gave up too quickly. A research paper that doesn’t require anything more than looking up two or three websites isn’t really a research paper. That’s just a search paper.


Search engines were created to help people search. That may sound really obvious, but it’s important you understand that being able to look up “the fourth president” or “the red planet” on a search engine is no different than looking up information with an index in the back of a school book. Anyone can do it.

You see, when search engines were first being built, their designers understood that the World Wide Web was made up of web pages. Lots of them. And those pages were filled with words, just like regular books. And in the back of a book, there is often an index of all the important keywords from that specific book. Next to each keyword, the index shows readers a page number where they can go find it. So these search engine companies basically copied that idea and built an index of all the keywords from every web page they could find.
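The book-index idea above can be sketched in a few lines of Python. The two “pages” here are made-up URLs and sentences, just for illustration:

```python
# A tiny keyword index: map every word to the set of pages it appears on,
# just like the index in the back of a book maps keywords to page numbers.
pages = {
    "": "neil armstrong was the first man to walk on the moon",
    "":  "the moon orbits the earth",
}

index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# Looking up a keyword returns every page that contains it.
print(sorted(index["moon"]))
```

Real search engines build the same kind of structure (called an inverted index), just for billions of pages instead of two.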

When you begin searching for something on a search engine, you begin with a letter. That letter triggers the search engine to look through its index for all words or phrases that begin with that letter. The reason you see certain words or certain phrases appear is because the search engine is pulling some of the most popular searches from their index under that letter or that combination of letters or that combination of words.
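The suggestion behavior described above can be sketched as a simple prefix lookup. The “popular searches” below are invented for the example:

```python
# As you type, return the stored popular searches that begin with
# the letters entered so far.
popular_searches = [
    "first man on the moon",
    "first president of the united states",
    "florida weather",
    "red planet",
]

def suggest(typed: str) -> list:
    return [s for s in popular_searches if s.startswith(typed.lower())]

print(suggest("fi"))  # only the phrases that start with "fi"
```

A real search engine ranks its suggestions by popularity as well, but the matching step works just like this.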

*Testing a Search Engine Index


Search engines are powerful. As we just saw in our search engine test, when someone searches for a word or a phrase or a question, search engines will scan through a really, really big alphabetical index of words and look for the best matches. That almost makes a search engine seem like a library that has everything you ever wanted to know. But as we also learned during our opening SOLO challenge, search engines don’t always know how to give us the answers we’re looking for. More importantly, even the World Wide Web doesn’t have all the information in the world. It only has the information that has been shared on the World Wide Web.

*What kinds of information might be on the World Wide Web, but almost impossible to find with a search engine?

*What kinds of information might not even be on the World Wide Web?




*A Volunteer For the Board

After Tim Berners-Lee introduced the World Wide Web, a space on the Internet where people could share information, students from all over the world tried to build browsers, like Mosaic, that other people could use to see information being shared on the World Wide Web. But as more and more people kept putting information onto the World Wide Web, some designers started building websites about websites. In other words, people started building websites with a single purpose: to help people find other websites. These became known as databases, digital libraries, and search engines.

Here is a list of 15 search engines that were created before Google:

1. Virtual Library (VLib) was created in 1992 by Tim Berners-Lee to help people find the pages he and others had created up to that point. Even though it isn’t very well known anymore, some people still use it today.

2. Wandex was created in 1993 as nothing more than an index of all the URLs that were on the World Wide Web (basically all the files and their locations on the Web). In the beginning, the index was pretty small. Today, the index of pages and sites is not only huge, but constantly growing.

3. W3Catalog was created in 1993 and was really the first official search engine. It worked a lot like the Virtual Library designed in 1992 by Tim Berners-Lee. W3Catalog went offline in 1996, just three years later, but someone eventually built a new W3Catalog that is still functioning.

4. Aliweb was created in 1993, but the only way a website could be included was if the creator of that website submitted it to Aliweb. For this reason, Aliweb never had as much information and it did not last very long. A sloppy-looking Aliweb search engine still exists online, but it’s not the original.

5. JumpStation was created in 1993 by Jonathan Fletcher, a college student in Scotland. He came up with a program that could “crawl” the web and look through source code for keywords in the <title> tags and the <h1> heading tags. It wasn’t much of a search engine, but his “crawler” program became an important feature of all future search engines, which is why some call Fletcher the “Father of the Search Engine.”

6. Infoseek was created in 1994, but the designers wanted people to pay in order to search. In other words, they believed that the information on the World Wide Web was worth a price and that people who wanted that information would be willing to pay for it. That didn’t happen. So eventually, Infoseek was purchased by Disney in 1999 and became part of

7. AltaVista was created in 1995 so that people looking for information could use “natural language” and not get caught up thinking about keywords. During the 1990s, AltaVista was one of the most powerful and popular search engines available. By 2003, however, as other search engines were becoming more popular, Yahoo purchased AltaVista and it gradually disappeared.

8. WebCrawler was created in 1994 by Brian Pinkerton at the University of Washington. Part of what made WebCrawler unique was that Pinkerton set up a “Top 25” list of the most popular websites being explored on his search engine. By 1995, WebCrawler was purchased by AOL, and it still manages to survive today, even though it doesn’t quite look the way it once did.

9. Yahoo! Search was created in 1994 by David Filo and Jerry Yang, two students at Stanford University. Originally, David and Jerry just wanted to create a list of their favorite sites with hyperlinks that went to those sites. But it wasn’t long before others started adding to the list. Today, Yahoo is still mentioned as the third most popular search engine in the world, which means David and Jerry ended up being very successful students.

10. Lycos was created in 1994 by Michael Mauldin at Carnegie Mellon University in Pittsburgh, Pennsylvania. Some say he was running the search engine out of a computer closet at the school. Mauldin named his search engine after Lycosidae, the scientific name for wolf spiders, which comes from the Greek word for “wolf.” The wolf spider was known for chasing after its prey instead of just waiting in its web. All modern search engines now use programs called “spiders” that crawl through the web looking for keywords. And Lycos is currently still the 11th most popular search engine in the world.

11. LookSmart was created in 1995 in Australia as a human directory of popular websites. But what made LookSmart successful was when they started putting paid advertisements into every list of results. For example, a carpet company might have paid LookSmart to put their company at the top of the search results page and every time someone searching for a carpet company clicked on that advertisement, LookSmart would get paid. This process was called pay-per-click and it still exists today. In the end, LookSmart was a search engine that worked more like an online commercial advertisement.

12. BackRub was created in 1996 by Larry Page and Sergey Brin, two students at Stanford University. Together, they spent a year researching all the positives and negatives of the search engines that had been created up to that point, and then wrote a Stanford research paper to explain their discoveries. They realized that a search engine could produce better results by giving “authority” to popular sites and then paying attention to less popular sites IF those authority sites linked back to a less popular site (hence the name BackRub). The only problem with BackRub was that it was built at Stanford and was using too much of the university’s bandwidth, so it never really lasted more than a year.

13. Inktomi was created in 1996 by a professor and a graduate student at the University of California, Berkeley. But with all the other search engines available, Inktomi’s creators realized that they might be better off providing search technology for those other search engine companies. Eventually, Yahoo purchased Inktomi in 2003, and after that, it faded from view.

14. Ask Jeeves was created in 1997 as more of a question-and-answer website. “Jeeves” was supposed to be a butler who could answer almost any question you asked, but the answers were based on information available somewhere on the World Wide Web. This was the first search engine where someone could type out an actual question like “who was the first man to walk on the moon” and get the answer back: Neil Armstrong. Eventually, Jeeves was dropped and the search engine just became known as

15. The MSN Search Engine was created in 1998 by the Microsoft Network (MSN). The idea behind MSN was the same idea Microsoft had when it first created Internet Explorer: convenience. If people were buying computers with Microsoft Windows and those computers already had Internet Explorer, Microsoft figured that the default web page on every IE browser should be a user-friendly search engine. Their idea worked. By 2009, MSN became Bing and is now the second most popular search engine in the world behind, well, that other search engine everybody uses.


When their BackRub search engine failed at Stanford in 1997, Larry Page and Sergey Brin decided to purchase a new domain and build a new search engine based on the idea of “back” links. They believed a website should be ranked by popularity. If lots of people were going to a website, then that made it popular. And if that popular site had links to other sites, those other sites would get a little credit for being connected with a popular site. That’s why the links were called backlinks: they were links that came back from a more popular site.

Basically, the idea behind BackRub was a lot like popularity at school. If you’re not a popular kid, but you start hanging out with someone who is, their popularity can rub off on you. Maybe not enough to make you as popular as they are, but it sure can help. Why? Because before you started hanging out with someone that everyone knew, no one else knew your name. And then, over time, more people get to know you if you stay connected with someone popular.

One day in the fall of 1997, Larry and Sergey sat with a team of engineers at Stanford and decided they needed a better name for their new search engine. BackRub would never last, so they wrote words on a whiteboard and brainstormed until something finally clicked. Sean Anderson, a member of the group, started thinking of a really big number. He imagined that a really good search engine could index all of the pages on the entire World Wide Web, however many there might ever be, so he thought of a 1 with 100 zeros behind it. A HUGE number, he thought.

If “1 with 2 zeros” is 100, and a “1 with 3 zeros” is 1,000, then what number has a “1 with 100 zeros” behind it? If you can find the answer to that question, you’ll discover the most popular word of the last 20 years.
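If you want to check the pattern without writing out all those zeros by hand, Python can build the number for you (the name of the number is still yours to discover):

```python
# A 1 followed by 100 zeros, written as a power of ten.
number = 10 ** 100

print(len(str(number)))        # 101 digits total: a 1 plus 100 zeros
print(str(number).count("0"))  # exactly 100 zeros
```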

