In simple words, a computing cluster is a group of computers that work together, distributing tasks and sharing hardware and software. In this way, computing capability can grow significantly.
Each computer runs its own operating system, and all are interconnected via a local area network (LAN).
Types of Clusters

Based on their characteristics, clusters are classified into three types:
- 1. High Availability Clusters
HA clusters aim to solve the problems that arise from mainframe failure in an enterprise. Rather than lose all access to IT systems, HA clusters ensure 24/7 access to computational power. This feature is especially important in business, where data processing is usually time-sensitive.
- 2. Load-balancing Clusters
- 3. High-performance Clusters
Cluster Benefits

Some of the benefits of cluster computing are:
Reduced Cost: The price of off-the-shelf consumer desktops has plummeted in recent years, and this drop in price has corresponded with a vast increase in their processing power and performance. The average desktop PC today is many times more powerful than the first mainframe computers.
Processing Power: The parallel processing power of a high-performance cluster can, in many cases, prove more cost effective than a mainframe with similar power. This reduced price per unit of power enables enterprises to get a greater ROI from their IT budget.
Improved Network Technology: Driving the development of computer clusters has been a vast improvement in the technology related to networking, along with a reduction in the price of such technology.
Scalability: Perhaps the greatest advantage of computer clusters is the scalability they offer. While mainframe computers have a fixed processing capacity, computer clusters can be easily expanded as requirements change by adding additional nodes to the network.
Availability: When a mainframe computer fails, the entire system fails. However, if a node in a computer cluster fails, its operations can be simply transferred to another node within the cluster, ensuring that there is no interruption in service.
Parallel Computing

Parallel computing is based on the concept of "divide and conquer": divide a task into smaller parts and execute them simultaneously. When each of these small parts finishes, it produces a piece of the final result; when all the tasks are complete, the sub-results are merged into the final result.
You need to have multi-core computers to carry out this type of calculation, because each core is responsible for carrying out one of the subtasks.
Our world is highly parallel: take an anthill, for example. Keeping the colony alive is a common task, and each individual in the colony does its part to complete it.
A computer commonly performs instructions in series. When an algorithm is parallelizable, it is usually divided into smaller units of work called threads, and synchronization mechanisms are applied so that the tasks execute at the right time and in the right order (e.g., when reading or modifying a shared variable). Some synchronization methods are:
- Locks (mutexes)
- Semaphores
- Condition variables
These methods prevent two or more threads from executing a critical section of the code at the same time, which would otherwise cause errors during execution or in the final result.
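A minimal sketch of protecting a critical section, using a lock from Python's standard `threading` module; the shared counter and the thread count are illustrative:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock guarantees that only one thread at a time runs this
        # critical section (the read-modify-write of the shared counter).
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```

Without the lock, two threads can read the same value of `counter` before either writes it back, losing updates and producing a wrong final result.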
Types of Parallelism

- Bit: Increases the processor word size. Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the length of the word. (For example, an 8-bit processor adding two 16-bit integers must first add the lower-order 8 bits and then add the higher-order 8 bits with the carry, requiring two instructions, whereas a 16-bit processor can complete the operation with a single instruction.)
- Instruction: A measure of how many of the operations in a computer program can be performed simultaneously.
- Data: Focuses on distributing the data across different parallel computing nodes. Data parallelism is achieved when each processor performs the same task on different pieces of distributed data.
- Task: Focuses on distributing execution processes (threads) across different parallel computing nodes.
Classes of Parallel Computers

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These are not mutually exclusive; for example, clusters of symmetric multiprocessors are relatively common.
- Multicore computing
- Symmetric multiprocessing
- Distributed computing
- Cluster computing: Interconnected computers.
- Grid computing: Interconnected clusters.