This is the 44th blog in a series about security – and about how security is, at its core, about how you think.
The world has just celebrated the 75th anniversary of the first all-digital programmable computer – ENIAC. This invention changed the world – first in the field of mathematics and then in every other field and occupation since. But how has security changed?
ENIAC had some very simple security – complexity and obscurity. No one walking into that room in the basement of the University of Pennsylvania would have had any clue about how to successfully tamper with the machine or steal the information it contained. Securing the computer system and preserving the relevant knowledge was as simple as locking the door and restricting access to only those engineers, physicists, mathematicians and support personnel who had a right to be in that room.
For many years, that was the security model: big computers were locked in separate rooms to which only a select few were granted access. Even if someone could get into the room, getting the computer to do anything useful or understanding the data it handled took specialized knowledge. You couldn’t buy a punch card reader on eBay to read the data, and no one had a computer in their basement to make sense of the data anyway.
One of the first milestones that changed the security model of computing was the development of programming languages. These allowed the programming of a computer system to be abstracted into a specialized language that more people could understand and use. Languages such as ALGOL and COBOL enabled more people to use and program computers as those machines started to propagate across many industries. More companies had a computer system – or access to one – to help their businesses thrive.
The next milestone that changed the security model of computing was the development of the Internet – not the World Wide Web (WWW) of the 1990s, but the ARPANET of the late 1960s, which changed access to computers from always being local to possibly being remote. This was a double-edged sword. The new functionality enabled many improvements, from electronic mail to “bulletin boards” and distributed programs. However, it also allowed outsiders from around the country and the world to connect to computer systems and try to exploit that newfound access. Computer administrators had to be much more diligent about who could “log in” to the computer and what services they offered on this new “Internet.”
The age of the personal computer in the late 1970s and early 1980s radically changed the security model of computers. It put computing power not only into the hands of highly educated people working to advance the state of computing, but also into those of many who were simply curious to see what they could do with their new machine. The commodity operating system made these computers easy to run and manage, freeing owners to program at will and discover the connected world around them. Terms like “Windows”, “UNIX”, “Apple”, and “hacker” became part of our language.
At about this same time, universities started teaching programming for the masses. Many began offering “computer science” as a degree for those who wanted to spend their lives deep in the world of bits and bytes. Books, tapes and sample programs were also available to those who wanted hands-on, informal training with computers and the Internet. The dark art of computers – how to run them, how to program them – was now open to anyone who saw these new machines as a new and exciting field of study.
Those two events changed the security model of computing – putting many more machines onto the “Internet” and many more curious minds, both good and bad, to work building systems or attacking them. System administrators, of both the massive servers and the new personal computers, had to be more diligent about the world around them. Worms, malware, “anti-virus” and firewalls were new terms that became prevalent during this time. The advent of the World Wide Web in the early 1990s added to the problem by putting more information and more people onto the Internet, along with the burgeoning area of e-commerce – where money moved across the Internet for the first time. This raised the stakes for what computer systems contained and increased the diligence that system administrators and companies had to apply to their computer systems.
In the years since, the constant speed-up of computer systems has only made the problems worse – each person carries thousands of times more computing power in a mobile phone than the original ENIAC had. We now use computers far more in our lives and store terabytes, petabytes, and even exabytes (1,000 petabytes) of information on them, which makes these systems and their data more interesting to anyone curious to see whether they can take that data for their own use. We also store much of our lives and personally identifiable information (PII) on these systems so that we can easily use the services and websites we like.
But the goals of security really have not changed. The principles of authentication, access control, data integrity and the rest still mean the same as they did in the early days of computing. The architecture of the system and its operating system is still paramount in keeping the data that resides on that system safe, no matter the time or the place. That is why the architectures and operating systems of Unisys ClearPath Forward® are still world-class today. They were designed for large-scale systems that have always contained sensitive data – not supercharged personal computers. It is the way that the designers, inventors, and developers of ClearPath Forward® thought – and still THINK – that keeps the security of these machines paramount today.
If you have not seen the ENIAC 75th anniversary webcast, you can find it here: https://www.vimeo.com/510236526