Software is the intangible part of a computer. It consists of the instructions and data that are processed by the CPU and stored on secondary storage.
There are three types of software:
System Software: The software that runs the computer hardware. It provides a platform for utility and application software to run.
Utility Software: Software used to maintain the computer and keep it running smoothly.
Application Software: The most common type of software; these allow users to complete tasks, e.g. web browsers, office software and games.
Operating Systems are the most complex pieces of software. They manage the computer hardware and provide a platform for utility and application software.
Operating Systems serve many purposes:
We all use a range of Operating Systems every day. Your smartphone will use Android or iOS, your laptop or desktop computer will use Windows or macOS, and even games consoles run specific operating systems to run their hardware.
Operating systems provide a User Interface (UI) for the user. The user interface allows the user to control the computer using input devices, and uses output devices to give feedback, such as displaying images and text on a screen.
There are three types of user interface:
A Command Line Interface (CLI) is a more technical way of interacting with a computer. With a CLI the user types in commands using a keyboard in order to control the computer.
In order to use a CLI the user must remember these commands; if they cannot remember the commands they will struggle to use the computer. Because of this it is much rarer to come across a computer using a CLI, and they are usually only used in specific situations or for specific applications.
However, despite CLIs not being recommended for novice users, they use a fraction of the resources that a Graphical User Interface requires.
The Graphical User Interface (GUI) is the most common user interface used on computers and can be found on desktop computers, laptops, smartphones and tablets. The user interacts with the computer using a mouse, or through a touch screen, to manipulate graphics, icons and menus on the screen.
They are so widespread due to their ease of use, specifically because tasks can be completed visually. This allows the user to explore the features of the computer and use trial and error to complete tasks if necessary.
Graphical User Interfaces require powerful hardware to display the graphics. Most computers with a GUI will have a Graphics Processing Unit (GPU) to drive the graphics on the display.
A menu interface is common in applications where ease of use is paramount. Menu interfaces are found in simple devices such as self-checkout machines, information points and appliances.
To control a menu interface the user navigates through a series of menu options, each leading to further options, until the user finds what they need.
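The navigation described above can be sketched as nested lookups. A minimal example, with hypothetical menu options invented for illustration:

```python
# A minimal text-based menu interface sketch: the user navigates
# nested options until they reach an action (menu data is made up).
MENU = {
    "Print services": {
        "Print in colour": "colour-print",
        "Print in black and white": "mono-print",
    },
    "Account": {
        "Check balance": "balance",
    },
}

def navigate(menu, choices):
    """Follow a list of choices through nested menus and return the
    action reached, or None if the path is invalid or incomplete."""
    node = menu
    for choice in choices:
        if not isinstance(node, dict) or choice not in node:
            return None
        node = node[choice]
    return node if isinstance(node, str) else None

print(navigate(MENU, ["Print services", "Print in colour"]))  # colour-print
```

A real menu interface would redraw the screen at each level, but the structure is the same: each selection narrows the options until an action is reached.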
Sometimes a user needs to use hardware which is not built into the computer or directly connected to the CPU; these pieces of hardware are referred to as Peripherals. One function of the Operating System is to allow these devices to communicate with the CPU and interface with software.
Peripheral devices include:
These devices "speak a different language" to the CPU, so in order to allow hardware and peripherals to be controlled by the computer we need additional software called drivers. Drivers allow the CPU to communicate with the additional hardware.
Drivers are specific to different Operating Systems (a driver for Windows would not work with macOS) and sometimes even between versions of operating systems.
Some drivers get updated to allow the hardware to run more effectively and to fix bugs – this is common for graphics cards.
Some Operating Systems allow more than one user to access a computer; these are referred to as Multi-user Operating Systems.
Multi-user operating systems manage separate user areas and settings for each user; this improves security and allows each user to better customise the computer to their needs.
In order to manage who is who, Operating Systems can provide authentication. Authentication is the process of checking who each user is and whether they should have access to the resources and data on the computer.
Authentication can be carried out in numerous ways:
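The most common method is a username and password. A minimal sketch of how a system might check a password without ever storing it (the function names here are illustrative; real systems store a salted hash, as shown):

```python
import hashlib
import hmac
import os

# Sketch of password authentication: the system stores a random salt
# and a salted hash, never the password itself.
def make_record(password: str) -> tuple[bytes, bytes]:
    """Create the stored credentials for a new user."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the attempt with the stored salt and compare."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)  # constant-time comparison

salt, stored = make_record("correct horse")
print(check_password("correct horse", salt, stored))  # True
print(check_password("wrong guess", salt, stored))    # False
```

Hashing with a salt means that even if the stored records are stolen, the original passwords are not directly revealed.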
If a computer is using a Multi-user Operating System, it is possible to apply User Access Levels. User Access Levels are often set up within computer systems used by businesses. Users are categorised into different groups and people within each group are given certain levels of access. For example: within a school, students will be given the lowest level of access to the computer network, teachers will be given a higher level of access, and network administrators will be given the highest level.
| | Own folders and files | All student folders and files | All teacher folders and files |
| --- | --- | --- | --- |
| Student | Read and Write | None | None |
| Teacher | Read and Write | Read and Write | None |
| Network Administrator | Read and Write | Read and Write | Read and Write |
In order to do their job and manage the network successfully the network administrator will need access to all user areas, while a teacher will only need access to their own area and, on occasion, access to student files.
This example is oversimplified to demonstrate the concept; within a school there may be some aspects of computers and software that even network administrators will not have access to.
When assigning levels of access to users and groups, this can be managed by setting specific levels of access on files and folders.
| Permission | Description |
| --- | --- |
| Read | The user can open and read the file or folder. They cannot edit or delete the file. |
| Write | The user can open the file or folder and make changes, or even delete the file. |
| Execute | The user can run executable files and software (this only applies to specific files and scripts which are executable). |
| None | No access to the file or folder. |
A user may be given read access to a file, but not write access to protect the file from accidental deletion, or a user may not be given the execute permission to prevent them from running specific software packages.
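One way to picture access levels is as a lookup table mapping each group to its permission in each area. A minimal sketch mirroring the school example above (group and area names are illustrative):

```python
# Access levels from the school example as a permission table.
PERMISSIONS = {
    "student":       {"own": "read-write", "students": "none",       "teachers": "none"},
    "teacher":       {"own": "read-write", "students": "read-write", "teachers": "none"},
    "administrator": {"own": "read-write", "students": "read-write", "teachers": "read-write"},
}

def can_write(group: str, area: str) -> bool:
    """Check whether a group has write access to an area;
    anything unknown defaults to no access."""
    return PERMISSIONS.get(group, {}).get(area, "none") == "read-write"

print(can_write("teacher", "students"))  # True
print(can_write("student", "teachers"))  # False
```

Defaulting unknown groups and areas to "none" follows the usual principle of denying access unless it has been explicitly granted.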
One of the main jobs of the Operating System is managing how files are physically stored on secondary storage, for example: where and how files are placed on a magnetic hard disk drive, and how these files are then displayed to the user.
Most Operating Systems visualise and display files and directories (folders) to the user as a hierarchy, with directories able to store files and other directories, and the initial directory being called the root directory. This hierarchy can be displayed as a tree diagram.
The Operating System allows users to carry out different actions on files. These include:
When an Operating System stores files on secondary storage it makes use of blocks and sectors. The secondary storage device (for example: a hard disk drive) is broken down into equally sized sectors, and when a file is being stored it is broken down into blocks (either the same size as, or less than, the size of a sector). The blocks are given a sequence number and then written to the sectors.
The blocks that make up files are not always stored contiguously (stored together); this can be caused by gaps left by deleted files. It can sometimes cause performance issues on older hard disk drives, which benefit from files being stored in a contiguous or sequential manner.
The operating system keeps track of where the blocks for each file are stored, so they can be put back together when needed.
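The splitting and reassembly described above can be sketched in a few lines. The block size here is deliberately tiny so the example is easy to follow; real file systems use blocks of several kilobytes:

```python
# Sketch of block-based storage: a file is split into fixed-size,
# numbered blocks; the numbers let the OS rebuild the file even
# when the blocks are not stored contiguously.
BLOCK_SIZE = 4  # bytes per block (tiny, for illustration only)

def split_into_blocks(data: bytes) -> list[tuple[int, bytes]]:
    """Break data into numbered blocks of at most BLOCK_SIZE bytes."""
    return [(i, data[pos:pos + BLOCK_SIZE])
            for i, pos in enumerate(range(0, len(data), BLOCK_SIZE))]

def reassemble(blocks: list[tuple[int, bytes]]) -> bytes:
    """Use the sequence numbers to put the file back together,
    even if the blocks arrive out of order."""
    return b"".join(data for _, data in sorted(blocks))

blocks = split_into_blocks(b"HELLO WORLD")
scattered = list(reversed(blocks))  # simulate non-contiguous storage
print(reassemble(scattered))        # b'HELLO WORLD'
```

Reversing the block list before reassembly stands in for fragmentation: the file comes back intact because the sequence numbers, not the physical order, define the file.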
When a program is run on a computer, it is copied from secondary storage into primary memory (RAM). A program stored in memory is referred to as a process. A process can be visible on screen (such as a web browser), but may also run in the background without a user interface (such as anti-malware software or a music player); these are sometimes called background processes or daemons.
When computers first became available they were only capable of running one process at a time, and the user would need to close the process before another one could be loaded into memory. Nowadays computers are capable of holding many processes in memory and having the CPU work on different processes seemingly simultaneously.
While a multi-core CPU can execute instructions from one process per core, a single-core CPU can only execute instructions from one process at a time. The problem is that a computer may need to run as many as 100 processes at any one time in order to function. For this to work, and be seamless to the end user, the operating system must schedule how much time each process has on the CPU.
Through scheduling the operating system can switch the CPU between processes thousands of times per second, making it appear to the user that all processes are being executed at the exact same time.
There are different options for scheduling process time on a CPU:
| Scheduler | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| First In First Out (FIFO) | The processes and their instructions are executed in the order they arrive in the process queue. | Processes are executed until they are finished, which reduces the need for task switching (saving time). | If there are large processes at the front of the process queue, shorter, more critical processes will not run until they reach the front of the queue. |
| Shortest Job First | The process which is closest to being completed goes first. | Shorter processes get executed quickly. Reduces the need for task switching (which saves time). | There is a potential that some longer processes never get CPU time if shorter processes keep getting added. |
| Round-robin | Each process is allocated a slice of time on the CPU; if the process cannot be completed in that time it returns to the process queue to await being allocated more time. In round-robin a process may be assigned a higher or lower priority to allow it more or less CPU time. | Processes will always get time on the CPU, regardless of when they entered the queue or their size. Results in a good balance between the number of processes completed in a given time, regardless of size. | Task switching can cause delays, as processes have to be placed back in memory and other processes retrieved for their slice of time. If there are many processes it may take some time for each to be scheduled time on the CPU. |
For the most part, computers use schedulers based on the round-robin approach.
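Round-robin scheduling can be simulated with a simple queue. A sketch, with made-up process names and run times (ignoring priorities):

```python
from collections import deque

# Round-robin simulation: every process gets a fixed time slice;
# anything unfinished rejoins the back of the queue.
def round_robin(processes: dict[str, int], time_slice: int) -> list[str]:
    """processes maps name -> remaining run time.
    Returns the order in which the processes finish."""
    queue = deque(processes.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= time_slice              # the process runs for one slice
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the queue
        else:
            finished.append(name)
    return finished

print(round_robin({"browser": 5, "music": 2, "scan": 9}, time_slice=2))
# ['music', 'browser', 'scan']
```

Notice that the short "music" process finishes first even though it did not arrive first, which is exactly the fairness property the table above describes.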
As more processes and data get added to memory (RAM), it is possible to run out of free space within memory. In the earlier days of computing this would have meant having to close programs in order to open new ones.
To solve this issue the concept of Virtual Memory (sometimes referred to as Swap) was created. When a computer is running low on space in memory, it moves "pages" associated with some of the processes to a special reserved space on secondary storage. The CPU cannot directly access pages held in this reserved virtual memory, so if a process is needed again its pages must be swapped back into memory before use.
There is a loss of computing performance when using virtual memory, especially when it is backed by a hard disk drive; however, on newer SSDs the process of swapping between memory and the SSD should not result in a noticeable drop in performance.
If a computer is using virtual memory frequently, it is recommended to upgrade the amount of memory installed in the computer if it is possible to do so.
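The swapping behaviour can be sketched as a small simulation. This model (names and the least-recently-used eviction policy are illustrative simplifications) keeps a fixed number of pages in "RAM" and moves the rest to "swap":

```python
from collections import OrderedDict

# Sketch of paging with swap: RAM holds a fixed number of pages; when
# RAM is full, the least-recently-used page is moved to swap space.
class VirtualMemory:
    def __init__(self, ram_slots: int):
        self.ram = OrderedDict()   # page -> data, oldest access first
        self.swap = {}
        self.ram_slots = ram_slots

    def access(self, page, data=None):
        if page in self.ram:
            self.ram.move_to_end(page)           # mark as recently used
        else:
            if page in self.swap:                # page fault: swap back in
                data = self.swap.pop(page)
            if len(self.ram) >= self.ram_slots:  # RAM full: evict LRU page
                victim, vdata = self.ram.popitem(last=False)
                self.swap[victim] = vdata
            self.ram[page] = data
        return self.ram[page]

vm = VirtualMemory(ram_slots=2)
vm.access("A", "a"); vm.access("B", "b"); vm.access("C", "c")
print("A" in vm.swap)  # True: A was least recently used, so it was evicted
print(vm.access("A"))  # "a" is swapped back into RAM before being returned
```

Accessing a swapped-out page forces a swap-in (and possibly another eviction), which is exactly why heavy virtual memory use costs performance.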
Utility software is required for maintaining the computer. Without utility software the computer may become unstable, or run “slower” over time. Some utilities run automatically; others are run manually.
Encryption is the process of using an algorithm called a cipher to scramble data in such a way that it would be extremely difficult to unscramble. Encryption algorithms use keys (essentially a password) to uniquely encrypt data. This key is needed to decrypt the data.
Encryption software is used by a computer in two different ways. Firstly, many operating systems use "full disk encryption" to protect their users' files; this prevents an attacker with local access from gaining access to the files on the computer. Encryption software is also used to encrypt data in transit, for example: a web browser will use encryption software to encrypt web requests before they are sent across the internet to a web server.
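The idea of a key-driven cipher can be shown with a deliberately simple XOR scheme. This is a toy, not a secure cipher (real software uses algorithms such as AES), but it demonstrates that the same key both scrambles and unscrambles the data:

```python
# Toy XOR cipher: each byte of the data is XORed with a byte of the
# key. Applying the same key again reverses the operation.
# NOT secure -- for illustrating the role of a key only.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet at noon"
key = b"secret"

scrambled = xor_cipher(message, key)
print(scrambled != message)        # True: the data has been scrambled
print(xor_cipher(scrambled, key))  # b'meet at noon'
```

Without the key, the scrambled bytes reveal nothing obvious; with it, decryption is just a second application of the same function.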
Compression is a process that reduces the size of a file.
We use compression software to:
There are two types of compression: Lossy & Lossless.
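Lossless compression can be demonstrated with Python's built-in zlib module: the compressed data is smaller, and decompression recovers the original exactly:

```python
import zlib

# Lossless compression: repetitive data compresses well, and the
# original bytes are recovered exactly after decompression.
original = b"AAAA BBBB AAAA BBBB " * 50
compressed = zlib.compress(original)

print(len(compressed) < len(original))          # True: file is smaller
print(zlib.decompress(compressed) == original)  # True: nothing was lost
```

Lossy compression (used for images, audio and video) instead discards detail the user is unlikely to notice, so the original can never be perfectly recovered.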
Traditional hard disks store data physically on a magnetic platter, and this data is read by moving a magnetic head across the disk (it physically moves). The operating system attempts to store all parts of a file contiguously (all together) in adjacent sectors.
Over time files get deleted, creating gaps of free space on the hard disk. When new files are written to the disk, parts of files may have to fit in between others. Storing a file across different parts of a hard disk is called “fragmentation”.
When we need to open and access a fragmented file the head must physically move to retrieve all the parts of the file; this can slow down opening or accessing the file. When a file is not fragmented, the head doesn’t need to skip over different parts of the hard disk, speeding up the process of opening a file.
Defragmentation software attempts to reorder how files are stored on a hard disk, placing parts of fragmented files back together and storing them contiguously.
Firstly, the defragmentation software will gather all of the free space on the disk together; this gives the software room to move files around the disk. The software will then move file fragments around until as many files as possible are stored contiguously.
Once complete, those previously fragmented files can be accessed more quickly.
Defragmentation software should only be run on magnetic hard disk drives, as there is no performance benefit on an SSD from storing files contiguously.
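The reordering step can be sketched by modelling the disk as a list of sectors, each holding a (file, block-number) pair or None for free space. This toy pass (a simplification of what real defragmenters do incrementally) rewrites the disk so each file's blocks sit together in order:

```python
# Toy defragmentation: collect every file's blocks, then lay the
# files out contiguously with all free space moved to the end.
def defragment(disk):
    files = {}
    for entry in disk:                  # gather each file's block numbers
        if entry is not None:
            name, seq = entry
            files.setdefault(name, []).append(seq)
    new_disk = []
    for name in sorted(files):          # place each file's blocks together
        new_disk += [(name, seq) for seq in sorted(files[name])]
    new_disk += [None] * (len(disk) - len(new_disk))  # free space at the end
    return new_disk

fragmented = [("a", 1), None, ("b", 0), ("a", 0), None, ("b", 1)]
print(defragment(fragmented))
# [('a', 0), ('a', 1), ('b', 0), ('b', 1), None, None]
```

On a real drive each move is a physical read and write, which is why defragmentation takes time and why it only pays off on hardware with a moving head.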
Back-up software is used to create copies of files in case the original files are lost, damaged or deleted. This process is essential in situations where data can be accidentally deleted, or where data could be lost to some form of cyber security issue.
There are two types of back-up:
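The contrast between a full back-up (copy everything) and an incremental back-up (copy only what changed since the last back-up) can be sketched with files modelled as a name-to-version mapping; the file names and versions below are invented for illustration:

```python
# Sketch of full vs incremental back-up over a name -> version mapping.
def full_backup(files: dict) -> dict:
    """Copy every file, regardless of whether it changed."""
    return dict(files)

def incremental_backup(files: dict, last_backup: dict) -> dict:
    """Copy only files that are new or changed since the last back-up."""
    return {name: version for name, version in files.items()
            if last_backup.get(name) != version}

files = {"essay.txt": 1, "photo.png": 1}
snapshot = full_backup(files)

files["essay.txt"] = 2                      # the essay is edited
print(incremental_backup(files, snapshot))  # {'essay.txt': 2}
```

Incremental back-ups are much faster and smaller, but restoring from them requires the last full back-up plus every increment since.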
Anti-malware software (also known as antivirus) scans a computer's files and any incoming files. The files stored on secondary storage are compared to a database of virus signatures; if a file matches a signature in the database it is identified as a virus and can then be removed.
Anti-malware software must be kept up to date so that new viruses can be identified; however, the software can also carry out heuristic analysis to identify malware that has not yet been catalogued. There are two types of heuristic analysis:
Many anti-malware applications will use both signature and heuristic analysis to provide the greatest chance of protecting a computer.
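The signature-matching side can be sketched by hashing each file and looking the hash up in a signature database. The "malware" and file contents below are made up purely for illustration:

```python
import hashlib

# Sketch of signature-based scanning: a file is flagged when its
# hash matches an entry in the signature database.
def signature(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Signature database built from a made-up sample, for illustration.
KNOWN_MALWARE = {signature(b"evil payload")}

def scan(files: dict[str, bytes]) -> list[str]:
    """Return the names of files whose signature is in the database."""
    return [name for name, data in files.items()
            if signature(data) in KNOWN_MALWARE]

files = {"notes.txt": b"homework", "freegame.exe": b"evil payload"}
print(scan(files))  # ['freegame.exe']
```

This also shows why signature databases must be kept up to date: a brand-new piece of malware has no entry to match, which is exactly the gap heuristic analysis tries to cover.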