miércoles, 29 de febrero de 2012

[DPS Class] OpenVPN



"A virtual private network (VPN) is a network that uses primarily public telecommunication infrastructure, such as the Internet, to provide remote offices or traveling users access to a central organizational network.

VPNs typically require remote users of the network to be authenticated, and often secure data with encryption technologies to prevent disclosure of private information to unauthorized parties.

VPNs may serve any network functionality that is found on any network, such as sharing of data and access to network resources, printers, databases, websites, etc. A VPN user typically experiences the central network in a manner that is identical to being connected directly to the central network. VPN technology via the public Internet has replaced the need to requisition and maintain expensive dedicated leased-line telecommunication circuits once typical in wide-area network installations.

Virtual private network technology reduces costs because it does not need physical leased lines to connect remote users to an Intranet."


For more information about VPN, visit VPN (Wikipedia)

OpenVPN



"OpenVPN is a software-based connectivity solution that uses SSL (Secure Sockets Layer) and virtual private network (VPN) technology.
OpenVPN offers point-to-point connectivity with hierarchical validation of remotely connected users and hosts. It is a very good option for Wi-Fi environments (IEEE 802.11 wireless networks) and supports a wide range of settings, including load balancing and more. It is released under the GPL as free software.

No other solution offers such a mix of enterprise-level security, safety, ease of use, and rich features.
OpenVPN simplifies the configuration of VPNs, reducing the difficulty of configuring other solutions such as IPsec and making the technology more accessible to people inexperienced with it."


For more information, visit OpenVPN Site

Installing and configuring OpenVPN on Ubuntu 10.04 LTS 32Bits


NOTE: I don't include screenshots of the installation because I had some problems during the process; however, the commands below are correct, and if you execute them in order you should have no problems. Also, unless otherwise indicated, all of the following instructions must be executed on the VPN server.


First, the installation of the packages:
sudo apt-get install openvpn openssl
Our working directory will be /etc/openvpn/
cd /etc/openvpn

Now that the openvpn package is installed, the certificates for the VPN server need to be created.
First, copy the easy-rsa directory to /etc/openvpn. This will ensure that any changes to the scripts will not be lost when the package is updated. You will also need to adjust permissions in the easy-rsa directory to allow the current user permission to create files.
sudo mkdir /etc/openvpn/easy-rsa/
sudo cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa/
sudo chown -R $USER /etc/openvpn/easy-rsa/

Next, edit /etc/openvpn/easy-rsa/vars, adjusting the values to your environment. This is my configuration:
export KEY_COUNTRY="MX"
export KEY_PROVINCE="NL"
export KEY_CITY="Monterrey"
export KEY_ORG="JuanCarlos"
export KEY_EMAIL="your_email@example.com"

Now, enter the following commands in order to create the server certificates:
cd /etc/openvpn/easy-rsa/
source vars
./clean-all
./build-dh
./pkitool --initca
./pkitool --server server
cd keys
openvpn --genkey --secret ta.key
sudo cp server.crt server.key ca.crt dh1024.pem ta.key /etc/openvpn/

Now, enter the following commands to create the client certificates. Replace CLIENTNAME with the hostname of the client; to find it, run hostname in a terminal on the client:
cd /etc/openvpn/easy-rsa/
source vars
./pkitool CLIENTNAME
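If the command succeeded, the new certificate and key should now be sitting in the keys directory. A quick sanity check (a sketch; CLIENTNAME is the same placeholder as above):

```shell
# The client certificate and key should now exist:
ls -l /etc/openvpn/easy-rsa/keys/CLIENTNAME.crt /etc/openvpn/easy-rsa/keys/CLIENTNAME.key
# Optionally, inspect the certificate's subject and validity dates:
openssl x509 -in /etc/openvpn/easy-rsa/keys/CLIENTNAME.crt -noout -subject -dates
```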

Now configure the openvpn server by creating /etc/openvpn/server.conf from the example file:
sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
sudo gzip -d /etc/openvpn/server.conf.gz
Backup /etc/openvpn/server.conf:
sudo mv /etc/openvpn/server.conf /etc/openvpn/server.conf.bak
Create a new /etc/openvpn/server.conf with the following options:
dev tun
proto tcp
port 1194
ca /etc/openvpn/easy-rsa/keys/ca.crt
cert /etc/openvpn/easy-rsa/keys/server.crt
key /etc/openvpn/easy-rsa/keys/server.key
dh /etc/openvpn/easy-rsa/keys/dh1024.pem
user nobody
group nogroup
server 10.8.0.0 255.255.255.0
persist-key
persist-tun
status openvpn-status.log
#verb 3
client-to-client
push "redirect-gateway def1"
#log-append /var/log/openvpn
#comp-lzo 

After configuring the server, restart openvpn by entering:
sudo /etc/init.d/openvpn restart
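To verify that the server actually came up, you can look for the tunnel interface and the status file named in server.conf (a quick sanity check; the status file is written relative to the daemon's working directory, usually /etc/openvpn on Ubuntu):

```shell
# A tun0 interface should now exist on the server:
ifconfig tun0
# The status log configured with "status openvpn-status.log":
sudo cat /etc/openvpn/openvpn-status.log
```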

Configuring the Clients


First, the installation of the packages:
sudo apt-get install openvpn openssl
Also, our working directory will be /etc/openvpn/
cd /etc/openvpn
Make some directories:
sudo mkdir /etc/openvpn/easy-rsa/
sudo mkdir /etc/openvpn/easy-rsa/keys

From the server, copy the following files to the client, and place them in the appropriate folder:
/etc/openvpn/ca.crt
/etc/openvpn/easy-rsa/keys/CLIENTNAME.crt
/etc/openvpn/easy-rsa/keys/CLIENTNAME.key
/etc/openvpn/ta.key
Where CLIENTNAME.crt and CLIENTNAME.key are the certificates created on the server earlier.
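One way to transfer them is with scp. This is just a sketch: user and client-host are placeholders for an account and address on the client machine.

```shell
# Run on the server; user@client-host is a placeholder.
scp /etc/openvpn/ca.crt /etc/openvpn/ta.key user@client-host:/tmp/
scp /etc/openvpn/easy-rsa/keys/CLIENTNAME.crt \
    /etc/openvpn/easy-rsa/keys/CLIENTNAME.key \
    user@client-host:/tmp/
# Then, on the client, move the files into place:
#   sudo mv /tmp/ca.crt /tmp/ta.key /etc/openvpn/
#   sudo mv /tmp/CLIENTNAME.crt /tmp/CLIENTNAME.key /etc/openvpn/easy-rsa/keys/
```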
Then with the server configured and the client certificates copied to the /etc/openvpn/ directory, create a client configuration file by copying the example. In a terminal on the client machine enter:
sudo cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn
Backup /etc/openvpn/client.conf:
sudo mv /etc/openvpn/client.conf /etc/openvpn/client.conf.bak
Create a new /etc/openvpn/client.conf with the following options. In the line remote 123.456.789.000 1194, replace 123.456.789.000 with the public IP address or the hostname of your server:
dev tun
client
proto tcp
remote 123.456.789.000 1194
resolv-retry infinite
nobind
user nobody
group nogroup
# Try to preserve some state across restarts.
persist-key
persist-tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/easy-rsa/keys/CLIENTNAME.crt
key /etc/openvpn/easy-rsa/keys/CLIENTNAME.key
comp-lzo
# Set log file verbosity.
verb 3 
Where CLIENTNAME.crt and CLIENTNAME.key are the certificates created on the server earlier, which must already be copied to the client.

Finally, restart openvpn:
sudo /etc/init.d/openvpn restart
Now you should be able to connect to the remote LAN through the VPN. If you run the ifconfig command, you should see a new tunnel interface, something like this:



Notice that a new IP address is assigned to the client; that is a private IP address of our VPN.

Also, you can test the network with some pings :)
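For example, from the client you can ping the server's end of the tunnel (with the server 10.8.0.0 255.255.255.0 directive above, the server normally takes 10.8.0.1):

```shell
# From the client: 4 pings to the server's VPN address.
ping -c 4 10.8.0.1
```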




UPDATE (11/mar/2012): Configuring and Connecting 2 Remote VPNs


First, follow all the previous steps to build a fully functional local VPN.

We make our cluster a client of another one. To do this, on the remote VPN server, open a terminal and create the corresponding certificates for the client. Replace CLIENTNAME with the hostname of the local VPN server; to find it, run hostname in a terminal on the local VPN server:
cd /etc/openvpn/easy-rsa/
source vars
./pkitool CLIENTNAME

Back on our local VPN, copy the following files from the remote VPN server to the local VPN server, and place them in the appropriate folders:
/etc/openvpn/ca.crt
/etc/openvpn/easy-rsa/keys/CLIENTNAME.crt
/etc/openvpn/easy-rsa/keys/CLIENTNAME.key
/etc/openvpn/ta.key
Remember, CLIENTNAME.crt and CLIENTNAME.key are the certificates created on the remote server earlier.
Then create the client configuration file by copying the example. In a terminal on the local VPN server, enter:
sudo cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn
Backup /etc/openvpn/client.conf:
sudo mv /etc/openvpn/client.conf /etc/openvpn/client.conf.bak
Create a new /etc/openvpn/client.conf with the following options. In the line remote 123.456.789.000 1194, replace 123.456.789.000 with the domain of the remote VPN server; it is highly recommended to use a service like DynDNS:
dev tun
client
proto tcp
remote remote.vpn.domain 1194
resolv-retry infinite
nobind
user nobody
group nogroup
# Try to preserve some state across restarts.
persist-key
persist-tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/easy-rsa/keys/CLIENTNAME.crt
key /etc/openvpn/easy-rsa/keys/CLIENTNAME.key
comp-lzo
# Set log file verbosity.
verb 3 
Where CLIENTNAME.crt and CLIENTNAME.key are the certificates created on the remote server earlier, which must already be copied to the client.

Finally, restart openvpn in the local VPN server:
sudo /etc/init.d/openvpn restart
Now you should be able to connect to the remote LAN through the VPN. You can also see how the client and server daemons start on the local VPN server:

[IMAGE]

You should have the tunnel interface like before, and now, if you run the route command, you should be able to see the IP range of your VPN as well as the IP range of the remote VPN.
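For example (the exact ranges depend on both server configurations):

```shell
# Print the kernel routing table without DNS lookups; with the configs
# above you should see entries for both VPN subnets (e.g. 10.8.x.x).
route -n
```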

[IMAGE]

Reference: https://help.ubuntu.com/10.04/serverguide/openvpn.html

[DPS Lab] How to open ports on router

This is a guide where I explain how to open ports on a router, in this case a Thomson TG585v7.



First, we need to login to our modem. Open a web browser and type the following address:

192.168.1.254

We are going to see the following screen:



In the fields, type the following data:

User name: TELMEX
Password: Your WEP KEY. (check the labels of your router)

If the information is correct we will see the main configuration screen:



Then, in the sidebar we will select the option "Herramientas" ("tools"), we will see this screen:



In the Herramientas (tools) screen, we are going to select the option "Comparticion de juegos y aplicaciones" ("Games and Applications Sharing"). We will enter this section:



Here is where we have control over the ports: we can open them, close them, and assign them to any device.
To do this, at the bottom of the screen we select the option "Crear nuevo juego o aplicacion" ("Create new game or application").
We will see this screen:



Here we will enter a name for our new game or application. In the options below we will select the one that says "Entrada manual de mapas de puertos" ("Manual Entry of Port Maps") and we click Next.

On the next screen:



we are going to define the range of ports, which can be as wide as we need. In this case I need to open only one port, 1194, so my interval is from 1194 to 1194.
The other options are left with their default values. We click "Agregar" ("Add"); the screen will refresh and we will see a table, not there previously, with the newly opened ports.



Next, we just need to assign that port to a device. Basically, this is a type of redirection: if a request arrives at our router on a specific port, it is redirected to the device assigned to that port.
For this we go to the options at the bottom of the modem setup screen and select the option "Asignar un juego o aplicacion a un dispositivo de red local" ("Assign a game or application to a local network device").
Then, in the next screen:



in the table, we choose the game we are going to assign, and the device to which we are going to assign it.
Once we have specified these data, we click the button "Agregar" ("Add").



And that is all: we have opened a port and also set up a rule to redirect traffic to the computer or device that will serve the request.

miércoles, 22 de febrero de 2012

[DPS Class] Contributions WEEK 4

We might make this some day...

For this week, I was researching Beowulf clusters and MPI systems with my partner Rafael Lopez, so we made two blog entries where we share the gathered information.




Also, we implemented the John The Ripper application; you saw the live execution last Tuesday.

For next week, we are beginning to research parallel CUDA, so Rafael Lopez and I are working on the construction of a GPU cluster for next week, with some examples and maybe a live execution.

NOMINATIONS:

  • Rafael Lopez, because he helped me a lot to understand some cluster concepts and configurations.
  • Emmanuel, because he improved the entry "MPI Samples" with an extra example.

That's all.

jueves, 16 de febrero de 2012

[DPS Class] Contributions WEEK 3

In the picture, CUDA summary of my computer :)


For this week, I extended my previous entry for the Wiki. I added a new method to install CUDA because all the other tutorials from my partners didn't work for me. I think this will be helpful, because the problems I had with the installation are common on internet forums.
LINK: GPU Computing with CUDA | Installation Section

Also, Rafael Lopez and I are working on the construction of a cluster using the John The Ripper architecture. This is under development.

This week, I'm going to nominate Gaby Garcia because she made some compilations of CUDA examples, the next step after my installation tutorial.


References

miércoles, 8 de febrero de 2012

[DPS Class] Contributions WEEK 2



As a contribution for this week, I wrote some information about Compute Unified Device Architecture (CUDA) in the Wiki, with some images and a short description in each one.

Link: http://elisa.dyndns-web.com/progra/CUDA


Also, I wrote a little entry about Supercomputing in Mexico.

Link:


Also, I translated the content of the Cluster Team page to English. :D

That's all for this week. :)

[DPS Lab] Supercomputing in Mexico

Since 2003, Mexico has appeared in the TOP500, a classification that ranks the 500 most powerful computers in the world twice a year.

In first place we have the supercomputer "K Computer"; the K comes from the Japanese word "Kei", which means 10 quadrillion.

Characteristics
  • Location: RIKEN Advanced Institute for Computational Science (Kobe, Japan)
  • Manufacturer: Fujitsu
  • Date of Operation: June 2011 (Fully Operational November 2011)
  • Operating System: Linux
Technical Specifications
  • 88,128 2.0 GHz 8-core SPARC64 VIIIfx processors packed in 864 cabinets (705,024 cores)
  • 1,410,048 GB of memory
  • 12,659.89 kW of power consumption
  • Water Cooling System
Taken from http://i.top500.org/system/177232


In the center of the country was created the LANCAD project (Laboratorio Nacional de Computo de Alto Desempeño [National Laboratory of High Performance Computing]) with the contribution of the Universidad Autónoma Metropolitana, Instituto Politécnico Nacional (CINVESTAV), and Universidad Nacional Autónoma de México.

Each of these 3 institutions hosts a computing cluster:

UNAM: Kan Balam

UNAM hosts the supercomputer Kan Balam. It was developed by HP and has 1,368 AMD Opteron 2.6 GHz processor cores, 3,016 gigabytes of RAM, and 160 terabytes of storage, housed in 19 racks that together occupy an area of 15 to 20 square meters.
Kan Balam's communications operate through a high-speed InfiniBand network at 10 gigabits per second.
Kan Balam has the processing capacity to perform 7.113 trillion arithmetic operations per second and can deliver this capability to up to 350 users. The computer can increase its capacity in the future if necessary, since it consists of servers with 4 processors each.



UAM: Aitzaloa

The Aitzaloa cluster consists of 3 main parts.

High Performance Computing Node (HPC)
  • Number of Nodes: 270 (135 Supermicro Twin) nodes
  • Processors: Quad-Core Intel Xeon 3.0 GHz with 1600 MHz front-side bus
  • Number of Processors: 540 Quad-Core (2,160 cores)
  • Memory: 16 GB RAM per node (4,320 GB total, distributed)
  • Computing Capacity: 18.4 TFlops
  • Communications: InfiniBand
  • Operating System: Linux CentOS 5.2
Storage System
  • Comprising: 4 HP ProLiant DL380 G5 servers
  • Storage System: 100 TB Lustre
  • Hard Drives: 150 1 TB drives in RAID 1 and RAID 6
  • Communications: InfiniBand and Gigabit Ethernet
Master node
  • Processors: 2 Intel Xeon 2.8 GHz Quad-Core
  • Front Side Bus: 1600 MHz
  • Memory: 32 GB RAM
  • Communications: InfiniBand and Gigabit Ethernet
  • Operating System: Linux
  • Distribution: CentOS 5.2
  • Local Storage: 9 TB

The companies that contributed to this project are HP and Sun Microsystems.



IPN (CINVESTAV): Xiuhcoatl

In late January, CINVESTAV presented the Xiuhcoatl supercomputer, whose name in Nahuatl means "fire serpent".

It has 3,480 Intel and AMD processors with a capacity of 24.97 teraflops and 7,200 GB of RAM.
It has a storage capacity of 45,350 GB on hard disk, and reaching its peak computing capacity would require between 70 and 80 kilowatts of power.
This supercomputer comprises 170 servers capable of performing 18 trillion mathematical calculations per second.
It is a hybrid cluster that integrates Intel and AMD processors, as well as graphics processing units with GPGPU technology (General-Purpose Computation on Graphics Processing Units).




The "grid" has a capacity of about 50 Teraflops.

Xiuhcoatl, Kan Balam, and Aitzaloa total 7,000 cores; they are interconnected by a fiber-optic network running from the IPN, through Ciudad Universitaria, to the UAM.

Some planned projects for this cluster range from modeling the proteins that cause Alzheimer's disease (interaction between atoms) to simulations of the Earth's climate, tsunamis, and the formation of stars.

The country is moving forward in the field of supercomputing; however, the center of the country is getting all the credit. Will the students of the North be able to contribute to supercomputing?

References

miércoles, 1 de febrero de 2012

[DPS Lab] Computing Cluster and Parallel Computing

Taken from http://www.phys.ntu.edu.tw



In simple words, a computing cluster is a group of computers that work together, distributing tasks and sharing hardware and software. With these methods, computing capabilities can grow significantly.
Each computer runs its own operating system, and all are interconnected via a local area network (LAN).





Types of Clusters:

Based on their characteristics, clusters are classified into 3 types:

  • 1. High Availability Clusters
  HA Clusters are designed to ensure constant access to service applications. The clusters are designed to maintain redundant nodes that can act as backup systems in the event of failure. The minimum number of nodes in a HA cluster is two – one active and one redundant – though most HA clusters will use considerably more nodes. 
HA clusters aim to solve the problems that arise from mainframe failure in an enterprise. Rather than lose all access to IT systems, HA clusters ensure 24/7 access to computational power. This feature is especially important in business, where data processing is usually time-sensitive. 

  • 2. Load-balancing Clusters
  Load-balancing clusters operate by routing all work through one or more load-balancing front-end nodes, which then distribute the workload efficiently between the remaining active nodes. Load-balancing clusters are extremely useful for those working with limited IT budgets. Devoting a few nodes to managing the workflow of a cluster ensures that limited processing power can be optimized. 

  • 3. High-performance Clusters
  HPC clusters are designed to exploit the parallel processing power of multiple nodes. They are most commonly used to perform functions that require nodes to communicate as they perform their tasks – for instance, when calculation results from one node will affect future results from another. 


Cluster Benefits

Some of the benefits of cluster computing are:

Reduced Cost:  The price of off-the-shelf consumer desktops has plummeted in recent years, and this drop in price has corresponded with a vast increase in their processing power and performance. The average desktop PC today is many times more powerful than the first mainframe computers. 

Processing Power:  The parallel processing power of a high-performance cluster can, in many cases, prove more cost effective than a mainframe with similar power. This reduced price per unit of power enables enterprises to get a greater ROI from their IT budget. 

Improved Network Technology: Driving the development of computer clusters has been a vast improvement in the technology related to networking, along with a reduction in the price of such technology. 

Scalability: Perhaps the greatest advantage of computer clusters is the scalability they offer. While mainframe computers have a fixed processing capacity, computer clusters can be easily expanded as requirements change by adding additional nodes to the network. 

Availability: When a mainframe computer fails, the entire system fails. However, if a node in a computer cluster fails, its operations can be simply transferred to another node within the cluster, ensuring that there is no interruption in service. 



Parallel Computing

Parallel computing is based on the concept of "divide and conquer": divide a task into smaller parts and execute them simultaneously. When each of these small parts finishes, it generates a part of the final result; when all tasks complete, the sub-results are merged into the final result.
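The divide-and-conquer idea can be sketched even in plain shell: split a job into parts, run each part as a background process, wait for all of them, and merge the partial results. This is a toy example, not real cluster code:

```shell
#!/bin/sh
# Toy divide-and-conquer: sum the integers 1..1000 in four parallel chunks.
partial_sum() {  # sum the integers from $1 to $2, writing the result to file $3
    s=0
    i=$1
    while [ "$i" -le "$2" ]; do
        s=$((s + i))
        i=$((i + 1))
    done
    echo "$s" > "$3"
}
partial_sum 1   250  /tmp/p1 &   # each chunk runs as its own process
partial_sum 251 500  /tmp/p2 &
partial_sum 501 750  /tmp/p3 &
partial_sum 751 1000 /tmp/p4 &
wait                             # synchronize: all subtasks must finish
total=$(( $(cat /tmp/p1) + $(cat /tmp/p2) + $(cat /tmp/p3) + $(cat /tmp/p4) ))
echo "total=$total"              # 1+2+...+1000 = 500500
```

Each background job is a "subtask"; wait is the merge point where the sub-results are combined.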

You need multi-core computers (or multiple computers) to carry out this type of computation, because each core is responsible for carrying out one of the subtasks.
Our world is highly parallel; take for example an anthill: keeping the colony alive is a common task, and each individual in the colony does its part to complete it.

A computer commonly performs instructions in series. When an algorithm is parallelizable, it is usually divided into small units of work called threads; we then apply synchronization so the tasks execute at the right time and in the right order (e.g., when reading or modifying a shared variable). Some synchronization methods are:
  • Locks
  • Condition variables
  • Semaphores

These methods prevent two or more threads from executing a critical section of the code at the same time, which would cause an error in the execution or in the final result.
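The same idea can be sketched in shell using a lock directory: mkdir is atomic, so only one process can "hold" the lock at a time. This is a toy illustration of a lock, not production code:

```shell
#!/bin/sh
# Toy mutual exclusion: five concurrent writers, each protected by a lock.
rm -rf /tmp/demo.lock
: > /tmp/demo.log
worker() {
    # Spin until we create the lock directory (mkdir succeeds for one process only).
    while ! mkdir /tmp/demo.lock 2>/dev/null; do
        sleep 0.01
    done
    echo "writer $1" >> /tmp/demo.log   # critical section
    rmdir /tmp/demo.lock                # release the lock
}
for n in 1 2 3 4 5; do worker "$n" & done
wait
wc -l < /tmp/demo.log                   # all five writes arrive intact
```

Without the lock, two writers could interleave inside the critical section; with it, they enter one at a time.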

Parallelism Levels

  • Bit: Increases the processor word size. Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the word length. (For example, consider a case where an 8-bit processor must add two 16-bit integers.)
  • Instruction: A measure of how many of the operations in a computer program can be performed simultaneously.
  • Data: Focuses on distributing the data across different parallel computing nodes. This is achieved when each processor performs the same task on different pieces of distributed data.
  • Task: Focuses on distributing execution processes (threads) across different parallel computing nodes.

Classes of parallel computers

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These are not mutually exclusive; for example, clusters of symmetric multiprocessors are relatively common.
  • Multicore computing
  • Symmetric multiprocessing
  • Distributed computing
    • Cluster computing: Interconnected computers.
    • Grid computing: Interconnected clusters.

References:

[DPS Class] Monitoring the efficiency of a computing cluster



This week I did research on various systems that can help us monitor the efficiency of our cluster and the available resources.

Because these are monitoring systems, the contribution is in the software section.

The link to this contribution is Monitoring the efficiency of a computing cluster

Where you can find more details and links for various applications.

Greetings :)