The Web continued to develop as new people saw new possibilities. The big problem was finding anything. In the early days there may have been only one website, but the number grew so rapidly that you just didn't know what was out there. The original idea was that the hyperlink would enable you to navigate your way to information on any subject you were interested in. This was the hypothesis of the early pioneers - that the hyperlink was enough to organize all of human knowledge. Unfortunately, they didn't count on the haphazard growth of the network of links, and it was becoming difficult to find things.
In 1994 Brian Pinkerton created a small application, a Web crawler, that searched the Web and created indexes of what was available. It was the first search engine; in 1995 it was sold to America Online, and in 1996 it was bought by Excite Inc.
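The core of what such a crawler did - fetch a page, record which words appear on it, then follow its links to new pages - can be sketched in miniature. This is an illustrative sketch, not Pinkerton's actual code; the tiny in-memory "web" below is invented so the example needs no network access.

```python
# A miniature Web crawler over an in-memory "web" - an illustrative
# sketch of the fetch/index/follow loop, not Pinkerton's actual code.
import re

# Stand-in for the Web: page name -> (page text, links it contains)
WEB = {
    "home":  ("welcome to the early web", ["about", "links"]),
    "about": ("a page about the web", ["home"]),
    "links": ("useful links about computers", []),
}

def crawl(start):
    """Breadth-first crawl building an inverted index: word -> set of pages."""
    index, seen, queue = {}, set(), [start]
    while queue:
        page = queue.pop(0)
        if page in seen or page not in WEB:
            continue                      # skip visited or dead links
        seen.add(page)
        text, links = WEB[page]
        for word in re.findall(r"\w+", text.lower()):
            index.setdefault(word, set()).add(page)
        queue.extend(links)               # follow hyperlinks to new pages
    return index

index = crawl("home")
```

Searching is then just a dictionary lookup: `index["web"]` returns every crawled page containing the word "web".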
At about the same time two students at Stanford University, David Filo and Jerry Yang, started a more ordered Web catalog called "Yet Another Hierarchical Officious Oracle", or Yahoo!
You may not believe this, but much of the organization and construction of these early indexes was done by humans scanning the links that the crawlers found. It was really only when Larry Page and Sergey Brin, the founders of Google, invented PageRank that fully automated Web search appeared. Even today Google uses lots of humans to help in the process - something it doesn't discuss much.
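PageRank's key idea - a page matters if pages that matter link to it - can be captured in a few lines of power iteration. What follows is a minimal sketch of that published idea, not Google's actual implementation; the three-page link graph at the end is invented for illustration.

```python
# Minimal PageRank by power iteration - a sketch of the published idea,
# not Google's production code. The link graph below is made up.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:                          # share rank among links
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:                                 # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

Here page C ends up with the highest rank because both A and B link to it - no human had to judge its importance, which is what made fully automated search possible.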
It is still the case that the big problem for the Web is finding things.
Microsoft joins in
In 1995, Netscape released Navigator 2 and Sun released Java - only then did Microsoft wake up to the fact that it was being left behind.
For reasons that we have to guess at, Microsoft had convinced itself that the Internet was a passing fad and one that it could sweep aside by establishing its own network – MSN (MicroSoft Network).
An MSN client was included in Windows 95 and while it might have looked like a Web browser the technology was non-standard and not based on HTML. It can best be described today as an example of a connected App. You ran it on your desktop and it gave you access to pages of information that Microsoft provided.
Microsoft believed that MSN and the Internet could coexist and users would subscribe to MSN because it provided content that was way beyond what you could get on the Web. It didn’t take long, however, for Microsoft to realise that there was the Web and then there was MSN and most users wanted the Web.
The Browser Wars
In a rapid response that many interpreted as desperation, Microsoft released Internet Explorer 1. By 1996 over 75% of Web users were using Navigator and, as Explorer 1 wasn't very good, it didn't seem likely that Microsoft would regain the ground it had lost by ignoring the Internet.
But everything at Netscape wasn't quite as wonderful as it seemed. Microsoft was giving Explorer away with no ifs or buts - it was straightforwardly free no matter who you were or what you were using it for. Netscape Navigator, on the other hand, was free if you were a student but otherwise it cost money. There was always a free beta to download if you didn't want to pay for the real thing. If you did pay up, the bad news was that Netscape continually and inconsistently changed its pricing policy - a way to irritate your paying customers if ever there was one.
This was the start of an era that saw a new beta every three months and the use of the Web and its users as a huge pool of software testers. It was also the start of the Web as a software-marketing tool.
Netscape’s management was also convinced that its product was better and had such a huge market lead that there was nothing it needed to do to keep that lead. Microsoft, on the other hand, had Bill Gates to drive it to eventual success. Explorer came with Windows and remained free. Version 3 was a lot better than version 1 and introduced lots of new technology, including ActiveX. Microsoft pushed things so fast that it pre-empted standards bodies and tried to establish its own technology as the de facto standard.
The reason for the panic was the idea - recently resurrected by Google - that one Web browser could rule them all. Web mania was gripping everyone so much that it seemed entirely likely that a browser might well usurp Windows as the most popular operating system. A browser could run programs in the form of Java applets and, in theory at least, it didn’t need an operating system. Any machine plus a browser was complete in itself, and the distinctions between Windows, Linux, Unix or whatever would vanish.
This clearly was not a nice prospect for Microsoft, and so it innovated to produce not only a better browser but a Windows-specific browser. With the overwhelming success of Explorer, the idea that a browser was all you needed more or less faded from the consciousness of Microsoft and very nearly everyone else - but it is still just as true, and Firefox OS and Chrome OS are proving that you don't need a traditional OS at all.
Netscape lost market share and was eventually bought by AOL. Meanwhile Microsoft integrated the browser so closely with Windows that it invented the “Active Desktop” in which the desktop became a browser. Other new ideas, most of which died a death, were “push” - the ability of servers to send data that clients never requested, and channels - a move to the TV model for the browser.
Users seem to have resisted all attempts to turn the Web into something passive, simply a marketing tool. Still, Microsoft’s attempts to make the Web central to Windows eventually landed it in court for anti-competitive practices.
The real and lasting problem with the Web isn’t a purely technological one. It’s all a matter of content. Users often complain that when they are looking for information on the Web, what they find are lots and lots of links that go to other pages of links, which lead to yet more links - and when they finally find a link that doesn't go to another page of links, it’s often broken.
The problem is that when it comes to information provision the Web is very much an amateur affair, with the addition of a few sites produced by academic or government-funded institutions that provide real information. Otherwise the Web survives on manufacturers’ support services, online shops, marketing and enthusiasts. Perhaps this is enough. There are some amazing examples showing that you don't need a professional infrastructure to create good and reliable content - Wikipedia, for example.
It might be enough if search were good. All of the major search engines do a terrible job of directing users to the sites they might be looking for. Some of this is because of commercial influences - paid search results and personalized search. But this also underestimates the difficulty of working out what is relevant and what is high quality. There are lots of blogs that are really useful but get virtually no traffic and so die from lack of interest. This is the fault of the search engines. At the moment all we can do is promote the use of social media to pass on links that have been found to be interesting - which works only once the initial discovery has been made.
Users expect the Web to be free, and attempts to launch information services that have to be paid for meet with little enthusiasm. For a time advertising promised a partial solution, but it produces too little revenue to fund quality information. If the Web is to improve, the next challenge is finding a way to fund content rather than inventing new technology.