Tuesday 2 May 2023

Neural Network from Scratch Using C (Part-3)

 

XOR Problem

In part 2, the AND and OR gates could be trained using a single perceptron, but the XOR gate could not, because it is not linearly separable. The XOR function can be written as \begin{align} y &= x_{1}\bar{x_{2}}+\bar{x_{1}}x_{2} \\ z_{1} &= x_{1}\bar{x_{2}} \\ z_{2} &= \bar{x_{1}}x_{2} \\ y &= z_{1}+z_{2} \end{align} Looking at the above equations, \begin{align} z_{1} &= x_{1} \land \bar{x_{2}} \\ z_{2} &= \bar{x_{1}} \land x_{2} \\ y &= z_{1} \lor z_{2} \end{align} y is split into two functions z1 and z2, each of which is linearly separable, while y itself is not; hence XOR cannot be realized using a single neuron, and the solution is to use multiple neurons. An organization of multiple neurons stacked in different layers is known as a "Multi Layer Perceptron". In this part we will build a very basic multi layer perceptron to solve the XOR problem. This part involves differential calculus. Basic ideas will be mentioned for information, but no in-depth calculus will be used; it is assumed the reader is familiar with derivatives.
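As a quick sanity check of this decomposition, the full truth table works out as: $$ \begin{array}{cc|cc|c} x_{1} & x_{2} & z_{1}=x_{1}\bar{x_{2}} & z_{2}=\bar{x_{1}}x_{2} & y=z_{1}+z_{2} \\ \hline 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \end{array} $$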

Multi layer Perceptron

A Multi Layer Perceptron (MLP) is a network of neurons arranged in more than one layer: input, hidden and output. Generally the input layer is a direct connection to the data, the hidden layer is where the real magic happens, and the output layer is the interface to the user. A hidden layer is called so because the user does not interact with it directly. An MLP may contain multiple hidden layers, but there is a practical limitation due to the "Vanishing Gradient Problem"; if interested, there is plenty of material on the internet. Here we use an MLP with one input layer, one hidden layer and one output layer, and the network is fully connected, that is, each neuron of a layer is connected to each neuron of the immediately neighbouring layer. Please see the picture below.

One thing to observe here is the hidden layer: each neuron in this layer has connections from every neuron in the input layer, and in turn connects to the output neuron. This is a simple fully connected neural network with 4 inputs, 5 hidden neurons and one output neuron. In future posts, networks with multiple outputs will be discussed and used.

The connections between layers are the weights. These weights are generally represented using matrices. For convenience we represent each weight as wij, where i is the ith neuron of the next layer and j is the jth neuron of the current layer. In the above case the weight matrix between the input and hidden layers is Wih[5,4] (5 rows and 4 columns), and the weight matrix between the hidden and output layers is Who[1,5]. The input, the hidden-layer output and the final output are column matrices.
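For reference, the network structure used in the listings below can be thought of as a struct along these lines (a plausible sketch only; the actual mlp1.c is not reproduced here, so apart from the fields wih, who and lr that appear later, the names are assumptions):

typedef struct nnet
{
	int n_inp;   // neurons in the input layer
	int n_hid;   // neurons in the hidden layer
	int n_out;   // neurons in the output layer
	matrix wih;  // input-to-hidden weights,  n_hid rows x n_inp columns
	matrix who;  // hidden-to-output weights, n_out rows x n_hid columns
	double lr;   // learning rate
}nnet;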

Forward Propagation

Input data moves through the network and produces some output; this is called forward propagation. Previously, with a single neuron, the calculation was easy; with multiple layers it is a bit more involved and uses matrix multiplication. Consider the image above. The input arriving at the ith neuron of the hidden layer can be expressed as: \begin{equation} h_{i} = w_{i0}x_{0}+w_{i1}x_{1}+w_{i2}x_{2}+w_{i3}x_{3} \end{equation} or, in general terms, for n inputs, \begin{equation} h_{i} = \sum_{j=0}^{n-1}w_{ij}x_{j} \end{equation} If the set of equations for hi is observed closely, it is seen to be the result of a matrix multiplication. $$ \begin{bmatrix} h_{0} \\ h_{1} \\ h_{2} \\ h_{3} \\ h_{4} \end{bmatrix} = \begin{bmatrix} w_{00}^{ih}&w_{01}^{ih}&w_{02}^{ih}&w_{03}^{ih} \\ w_{10}^{ih}&w_{11}^{ih}&w_{12}^{ih}&w_{13}^{ih} \\ w_{20}^{ih}&w_{21}^{ih}&w_{22}^{ih}&w_{23}^{ih} \\ w_{30}^{ih}&w_{31}^{ih}&w_{32}^{ih}&w_{33}^{ih} \\ w_{40}^{ih}&w_{41}^{ih}&w_{42}^{ih}&w_{43}^{ih} \end{bmatrix} \times \begin{bmatrix} x_{0} \\ x_{1} \\ x_{2} \\ x_{3} \end{bmatrix} $$ $$H = W^{ih}.X$$

The activation function of the hidden layer has to be applied to this resultant matrix to produce the hidden layer's output. The values thus generated are the input to the output layer (in our case a single output neuron). As with the previous layer, a matrix multiplication gives the input to the output layer, and applying the output layer's activation function gives the network output. $$ \begin{bmatrix} O_{0} \end{bmatrix} = \begin{bmatrix} w_{0}^{ho}&w_{1}^{ho}&w_{2}^{ho}&w_{3}^{ho}&w_{4}^{ho} \end{bmatrix} \times \begin{bmatrix} h_{0} \\ h_{1} \\ h_{2} \\ h_{3} \\ h_{4} \end{bmatrix} $$ $$ O = W^{ho}.H $$
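As a sketch, the forward pass maps directly onto matrix calls. The helpers below (mat_mult returning the product, matrix_map applying a double-to-double function to every cell in place, and sigmoid) are assumed to behave as they are used in the training listing later in this post:

	// forward pass: compute and activate each layer in turn
	matrix hidden_out = mat_mult(nn.wih, inp);        // H = Wih . X
	matrix_map(&hidden_out, sigmoid);                 // activate hidden layer
	matrix final_out  = mat_mult(nn.who, hidden_out); // O = Who . H
	matrix_map(&final_out, sigmoid);                  // activate output layer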

Back Propagation

The output O depends on the weights Who and on H, and H in turn depends on the weights Wih. Since the initial weights are chosen at random, they are unlikely to be perfect, so the output O differs from the expected (target) value. The difference between the target and the output is known as the error. This error has to be compensated by adjusting the weights Who and Wih. The adjustment starts at the last layer and works back towards the beginning, hence the name Back Propagation.

The error has to be distributed among the different layers according to how much each contributed to it. To find each contribution, a partial derivative of the error with respect to the contributing elements has to be computed. The full description and derivation are beyond the scope of this article, as they would lengthen it considerably; for reference one can watch this video.
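In outline only (the error function $E$ and the net input $net_{O}$ are notation introduced here for illustration): with a squared error $E=\frac{1}{2}(t-O)^{2}$, the chain rule gives for a hidden-to-output weight \begin{equation} \frac{\partial E}{\partial w_{j}^{ho}} = \frac{\partial E}{\partial O}\cdot\frac{\partial O}{\partial net_{O}}\cdot\frac{\partial net_{O}}{\partial w_{j}^{ho}} = -(t-O)\,\sigma'(net_{O})\,h_{j} \end{equation} Moving opposite to this gradient is essentially what the weight update below implements.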

We can represent the updated weight matrix of a layer as follows $$ W_{new}=W_{cur}+\alpha\,[\text{target}-\text{output}]\cdot X^{T}$$ where

  • X is the input to the layer
  • α is the learning rate

(In the implementation below, the error term is additionally scaled, element-wise, by the derivative of the layer's activation function.)

 

Implementation

To adjust the weights in Who, we need to find the error at the output, then apply the derivative of the activation function to the (already activated) output and multiply it element-wise with the error. Next multiply by the scalar learning rate (α). The result, multiplied by the transpose of the input to this layer, that is the activated output of the hidden layer, gives us the delta weights. These delta weights added to the weight matrix Who give the updated hidden-to-output weights.

The same procedure has to be repeated for the input-to-hidden weights. To begin with we need the error at the hidden layer, which is found by multiplying (Who)T with the error at the output. The rest of the procedure is the same as above.


	// find the error at the output and propagate it back to the hidden layer
	matrix err_out=subtract_matrix(expect,final_out);
	matrix whoT = transpose(nn.who);
	matrix err_hid=mat_mult(whoT,err_out);
	
	// calculate delta weights (output layer first, then hidden layer)
	matrix_map(&final_out,sigmoid_prime);
	mat_mult_hadamard(&err_out,&final_out);
	matrix hidT= transpose(hidden_out);
	matrix delta_who = mat_mult(err_out,hidT);
	mat_mult_scalar(nn.lr,&delta_who);
	
	matrix_map(&hidden_out,sigmoid_prime);
	mat_mult_hadamard(&err_hid,&hidden_out);
	matrix inpT = transpose(inp);
	matrix delta_wih = mat_mult(err_hid,inpT);
	mat_mult_scalar(nn.lr,&delta_wih);
	// update weights
	matrix tempa =add_matrix(nn.who,delta_who);
	assign_matrix(&nn.who,&tempa);
	matrix tempb = add_matrix(nn.wih,delta_wih);
	assign_matrix(&nn.wih,&tempb);
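The sigmoid_prime mapped over the outputs above is assumed to be a plain double-to-double helper suitable for matrix_map. Since it is applied to values that have already passed through the sigmoid, a typical definition (an assumption; mlp1.c is not reproduced here) is:

// derivative of the sigmoid expressed via the activated value:
// if y = sigmoid(x), then d(sigmoid)/dx = y * (1 - y)
double sigmoid_prime(double y)
{
	return y * (1.0 - y);
}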

With this we are ready to build a very small neural net and train it on the XOR problem. The MNIST dataset can also be trained, but that needs patience and time (I achieved 91.6% accuracy). The Iris dataset and the wheat-seeds dataset are trained very easily. Below is the XOR implementation.


#include <stdio.h>
#include "../mlp1.c"
int main(int argc, char *argv[])
{
	srand(0);
	nnet my_nn =create_nn(2,3,1);

	// training
	
	for(int n = 0; n< 30000; n++)
	{
		matrix inp = create_matrix(2,1);
		matrix expected = create_matrix(1,1);
		
		set_cell(&inp,0,0,1.0);
		set_cell(&inp,1,0,0.0);
		set_cell(&expected,0,0,1);
		train(my_nn,inp,expected);
		
		set_cell(&inp,0,0,1.0);
		set_cell(&inp,1,0,1.0);
		set_cell(&expected,0,0,0);
		train(my_nn,inp,expected);
		
		set_cell(&inp,0,0,0.0);
		set_cell(&inp,1,0,1.0);
		set_cell(&expected,0,0,1);
		train(my_nn,inp,expected);
		
		set_cell(&inp,0,0,0.0);
		set_cell(&inp,1,0,0.0);
		set_cell(&expected,0,0,0);
		train(my_nn,inp,expected);
		
		delete_matrix(&expected);	
		delete_matrix(&inp);
	}
	// save the trained network
	save_nn(my_nn,"xor2.net");

	// load nn from file
	my_nn = load_nn("xor2.net");
	// test trained network
	matrix inp = create_matrix(2,1);
	set_cell(&inp,0,0,1.0);
	set_cell(&inp,1,0,1.0);
	matrix out = predict(my_nn,inp);
	display_matrix(out);
	delete_matrix(&out);
	set_cell(&inp,0,0,0.0);
	set_cell(&inp,1,0,0.0);
	out = predict(my_nn,inp);
	display_matrix(out);
	delete_matrix(&out);
	set_cell(&inp,0,0,0.0);
	set_cell(&inp,1,0,1.0);
	out=predict(my_nn,inp);
	display_matrix(out);
	delete_matrix(&out);
	set_cell(&inp,0,0,1.0);
	set_cell(&inp,1,0,0.0);
	out = predict(my_nn,inp);
	display_matrix(out);
	
	delete_matrix(&out);
	delete_nn(my_nn);
	delete_matrix(&inp);
	return 0;
}
=======output====
[ashok@fedora xor]$ ./xor
[ 0.00559855 ]

[ 0.06528170 ]

[ 0.96165163 ]

[ 0.96165793 ]

Though I may put all the code here or on GitHub, I would like the reader to implement their own version analogous to this one. If you run into any problem, contact me for the full code, it is free, but please try to implement it yourself first.

Till now we discussed

  • What is an artificial neuron
  • How to implement it
  • What are the areas it can be used in (linearly separable problems)
  • How to train it
  • What is XOR problem
  • What is Multi layer neural network
  • How to implement one
  • How to train
  • Example of XOR implementation
This is the end of the three-part series on a simple neural network. Part 1 and Part 2 can be found here. In a future article we will discuss Convolutional Neural Networks (CNNs).
 

Sunday 30 April 2023

Neural Network from Scratch Using C (Part-2)

 

Implementing a Perceptron

In this part we will start implementing a single perceptron and do some experiments. 

The first part can be found here.

A single Perceptron

A single perceptron (neuron) can do a lot of jobs. Yes, like a neuron in our nervous system, it can do a lot of things. As one grows from infant to toddler, the neurons in the brain get trained (learn) to coordinate our limbs, coordinate our activities and so on.
Similarly, a perceptron can be trained to classify data according to the training data, but it can only classify linearly separable data.

What is linearly separable?
Linear separability means that a plot of the data can be separated by a single straight line: depending on the separation criterion, there exists a straight line dividing the data points into two regions. For example, for the AND gate the line x1 + x2 = 1.5 separates the point (1,1) from the other three points.

Now let's see a small code implementing it step by step. NOTE: we need a matrix manipulation library for all our neural network adventures. For this purpose my tmatlib on GitHub is quite suitable, though one should not expect it to perform like BLAS. In my code I used a similar version known as smatlib; as it is continuously evolving I have not published it on GitHub, but rest assured the functionality and performance of both are the same.

// structure of our perceptron.

typedef struct perceptron
{
	int n_inp;   // number inputs
	matrix W;    // weight matrix
	double lr;   // learning rate
	double bias; // bias
}perceptron;

A straight line on a two-dimensional plane can be expressed as y = mx + c. The bias parameter in our perceptron is related to c. Our training data consists of x and y: x is the input and y is the expected (target) value. Our perceptron learns the value of m. The learning rate (lr) determines how fast or slow the perceptron learns. If the learning rate is too low, training may get trapped in a local minimum; if it is too high, it may oscillate and never reach a solution. So the bottom line is that one has to experiment with the learning rate value. The weight matrix is initialized with random values and is gradually adjusted/updated during training to achieve a better result.
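For illustration, a constructor for this struct could look like the sketch below. It assumes the matrix helpers create_matrix(rows, cols) and set_cell from the library behave as in the later listings, and it is not necessarily the exact code in perceptron.h (the demo outputs below actually start from fixed initial weights):

// illustrative constructor: one row of n_inp weights plus lr and bias
perceptron create_perceptron(int n_inp, double lr)
{
	perceptron p;
	p.n_inp = n_inp;
	p.lr    = lr;
	p.bias  = 1.0;
	p.W     = create_matrix(1, n_inp);
	for(int c = 0; c < n_inp; c++)
		set_cell(&p.W, 0, c, (double)rand() / RAND_MAX); // random start
	return p;
}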

 I uploaded all the codes at Simple-NeuralNetwork.

In the repo there are 4 files perceptron.h, linearsep.c, and_gate.c and or_gate.c. We will discuss bits from those files and see how they work.

In the perceptron.h file there are functions to create, manipulate, train, display and delete the perceptron. The general sequence is 

  • create a perceptron
  • prepare data set
  • train
  • test with predict function

This is simple, isn't it? Now we will look into two functions, namely predict and train. These functions are the core of this perceptron.

predict:

double predict(perceptron p, double inp[])
{
	double res=0.0;
	
	for(int r = 0; r < p.n_inp; r++)
	{
		res += inp[r]*get_cell(p.W,0,r);
	}
	res += p.bias;
	double ret= sigmoid(res);
	return(ret);
} 

This function multiplies the inputs with the corresponding weights and adds them up; then, to activate the output, it applies the sigmoid function. The sigmoid function ranges from 0 to 1. Please see the first part here, where the summing process and the sigmoid function are defined.

Train:

  void train(perceptron *p,double *data,double desired)
  {
	double guess = predict(*p,data);
	
	double err = desired - guess;
	for(int n = 0; n < p->n_inp; n++)
	{
		double cur_w = get_cell(p->W,0,n);
		set_cell(&p->W,0,n,cur_w+p->lr*err*data[n]);
	}
	p->bias += p->lr *err;
  } 
 

In this function the perceptron is trained by adjusting the weights and the bias. The steps are as follows.

  • calculate output
  • find error by subtracting output from target
  • find delta weight by multiplying error, data(input) and learning rate
  • find delta bias by multiplying error and learning rate
  • update weights
  • update bias

In step 3 we calculate the delta weights. This is the most important step in the whole code. How and why the delta is the product of the error and the input is a topic in itself. As a hint: it is a gradient, so a derivative of the error term with respect to the weights (and hence involving the inputs) has to be considered.
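As a sketch of that gradient (the notation here is introduced only for illustration): with a squared error and a sigmoid output, \begin{align} E &= \frac{1}{2}(d-y)^{2}, \qquad y = \sigma(z), \qquad z = \sum_{m} w_{m}x_{m} + b \\ \frac{\partial E}{\partial w_{m}} &= \frac{\partial E}{\partial y}\cdot\frac{\partial y}{\partial z}\cdot\frac{\partial z}{\partial w_{m}} = -(d-y)\,\sigma'(z)\,x_{m} \end{align} so moving against the gradient gives $\Delta w_{m} \propto err\cdot x_{m}$. The train function above uses the simpler rule $\Delta w_{m} = lr\cdot err\cdot x_{m}$, with the $\sigma'(z)$ factor dropped.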

Testing

To try out this implementation, three example programs are provided, which follow the general sequence above. The first one is linear_sep.c. In this demo program random data sets are created and the perceptron is trained; then it is compared against a known data set. The output on my system is as below.


************Perceptron:**************

number of inputs = 2
Weights:
[ 1.00000000 ]
[ 1.00000000 ]

Bias = 1.000000
*************************************
************Perceptron:**************

number of inputs = 2
Weights:
[ 61.70302487 ]
[ -31.13024636 ]

Bias = 0.622900
*************************************

is (2.000000 < 2 x 20.000000 + 1) predicted = 1.000000

The other two demo programs ask: can this artificial neuron reliably mimic a digital gate? Well, that was the motive when the artificial neuron was proposed. It turns out that the AND and OR gates can be simulated but the XOR gate cannot. Why? From the diagrams below it is obvious that the OR and AND functions are linearly separable but the XOR function is not.


// OR gate implementation.


    // x1 x2 y
    //  0  0 0
    //  0  1 1
    //  1  0 1
    //  1  1 1
    //
    //  0,1-------1,1
    //   |          |
    //  \|          |
    //   |\         |
    //   | \        |
    //  0,0-\-----1,0
    //       \-separating line
    
    // AND gate implementation.
	// x1 x2 y
	//  0  0 0
	//  0  1 0
	//  1  0 0
	//  1  1 1
	//
	//  0,1---\----1,1
	//   |      \    |
	//   |        \  |
	//   |          \|
	//   |           |\-line separating  
	//  0,0-------1,0
// XOR function
// x1  x2  y
//  0   0  0
//  0   1  1
//  1   0  1
//  1   1  0
//        /----------|----- 2 lines separates
//  0,1--/-----1,1   |
//   |  /       |    |
//   | /        |    |
//   |/         |/---|
//   /          /
//  /|         /|
//  0,0-------/-1,0
    

Output of OR gate 

************Perceptron:**************

number of inputs = 2
Weights:
[ 0.00000000 ]
[ 0.00000000 ]

Bias = 1.000000
*************************************
************Perceptron:**************

number of inputs = 2
Weights:
[ 11.48282021 ]
[ 11.48242093 ]

Bias = -5.281243
*************************************
inputs 0, 0 predicted = 0.005060
inputs 1, 0 predicted = 0.997978
inputs 0, 1 predicted = 0.997977
inputs 1, 1 predicted = 1.000000

 

Output of AND gate

************Perceptron:**************

number of inputs = 2
Weights:
[ 0.00000000 ]
[ 0.00000000 ]

Bias = 1.000000
*************************************
************Perceptron:**************

number of inputs = 2
Weights:
[ 10.22930534 ]
[ 10.22993658 ]

Bias = -15.512550
*************************************
inputs 0, 0 predicted = 0.000000
inputs 1, 0 predicted = 0.005050
inputs 0, 1 predicted = 0.005053
inputs 1, 1 predicted = 0.992943

In the above results, if we consider 0.99 as 1 and anything less than 0.005 as 0, then our results agree with the truth table.

As we saw above, XOR is not linearly separable, so we cannot simulate it with a single perceptron; we need more than one layer. We will do that in a future post.

Till then happy coding.

Saturday 29 April 2023

Neural Network from Scratch using C

Hello There, 

Around a year ago we talked. Now several things have changed. I recently joined as a PhD scholar 😃 and decided to soil my hands in AI/ML. During my M.Tech days (2011-2013) we were taught soft computing, though I did my thesis on Network On Chip (NOC), which is about mapping different IPs onto a silicon substrate. I devised a Genetic Algorithm based mapping as well as a deterministic heuristic based mapping using graphs; both codes are available on my GitHub page. But I never paid much attention to ML until 2016, when I was developing a traffic detection application for a client, though it never went beyond the PoC stage.

Now I intend to put all my knowledge towards earning a degree (yes, a doctorate). To refresh my understanding from my master's degree days, I decided to go step by step, implementing the building blocks needed to learn this topic.

We should start with a single neuron. Artificial neurons, the counterparts of natural neurons, share similar characteristics.

[Image: Perceptrons - the most basic form of a neural network (Applied Go)]

Dendrites are the inputs and the axon is the output. One thing we must add here is activation: it is the strength we provide as output.

In the case of a perceptron, inputs are applied through weights, meaning different input channels have different weights. The perceptron sums the weighted inputs and applies some sort of activation before outputting the result.

So mathematically we can write 

output = activation(sum ( inputs x weights)) 

output = activation(sum ( input array * weights array))

So we need two one-dimensional matrices, one for the inputs and one for the weights; on the result of the multiplication we then apply the activation function.

Let's assume we have 3 inputs i1, i2, i3 with weights w1, w2 and w3. The output will then be \begin{equation} z=\sum_{m=1}^{n} i_{m}w_{m} \end{equation} and the activation of z is the output.

There are several activation functions used to activate the output of a perceptron. The most used are the binary (step) activation, ReLU (max(0, x)) and the sigmoid (whose output lies between 0 and 1).

We will use sigmoid for our application. \begin{equation}\sigma(z)=\frac{1}{1+e^{-z}}\end{equation}
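In C this can be written directly; a minimal sketch (the helper used in the later parts may differ in details):

#include <math.h>

// sigmoid activation: squashes any real z into the open interval (0, 1)
double sigmoid(double z)
{
	return 1.0 / (1.0 + exp(-z));
}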

 Next we will see the simple implementation of this perceptron.


Bye Bye for now.

Wednesday 10 August 2022

GitHub Update

 After a Long time another post.

Added 2 small projects, one is a single header Matrix library and another is a CPU Simulator.

The matrix library (tmatlib) is capable enough to solve simultaneous linear equations. It was written several months back, but I was too lazy to publish it earlier.

The CPU simulation code was my tech-itch :p. It is very minimal and just for fun: it has a small instruction set, only 8-bit wide registers and a very small memory of only 1 KB, which includes stack, data and code.

my github link here.

Saturday 10 July 2021

Laravel Application deployment on Fedora 34 server

Recently I had to migrate an old Laravel application of a client to a newer OS release, Fedora 34. When it was developed, Fedora 29 was the latest 😁.

 

The procedure is as below.

  • Install Fedora 34
  • Update the installation (sudo dnf update -y)  
  • Install Apache (sudo dnf install httpd)
  • Install Mariadb, Mariadb server (sudo dnf install mariadb mariadb-server)
  • Install PHP, PHP Mbstring, PHP zip, PHP Mysqlnd, PHP Mcrypt, PHP XML, PHP Json, PHP suMod ( sudo dnf install composer php php-mbstring php-zip php-mysqlnd php-mcrypt php-xml php-json mod_suphp -y)
    • check whether PHP is working properly by using the following script
phpinfo.php
<?php
phpinfo();
?> 
This script should be in the /var/www/html/ directory, which is the default document root for the httpd server.
Point the browser to this file and see whether it shows a page with the PHP version and other details. If not, check whether the Apache server is running. 
  • As super user, create a laravel.conf file under the /etc/httpd/conf.d directory with the following content
<VirtualHost *:80>
    ServerName your server name
    DocumentRoot /var/www/html/laravel_project/public
    <Directory /var/www/html/laravel_project>
        AllowOverride All
    </Directory>
</VirtualHost>
  • Copy the Laravel project to /var/www/html/ as super user and change the owner with chown -R apache:apache laravel_project 
  • Start Mariadb (sudo systemctl start mariadb)
This is almost complete. For your specific case, different services or further tweaking may be required. 

Laravel is a good framework for web-app development, but for some time now it has not been gaining much acceptance; the buzzword now is Node. In a future article I may write about setting up and using the Express framework for web apps.

Thank you.

Saturday 26 December 2020

Deploying Django application on Fedora 33

Deploying a "Hello, World" Django app on Fedora 33
Requirement
1) Fedora server
2) NGINX web server
3) Django application framework
4) Gunicorn

A "Hello, World" application using Django

The Django framework is based on Python (at the time of writing, the current Python version is 3.7).
First check whether Python is installed; if not, install it.

$ sudo dnf install python3


Once Python is installed, Django should be installed using pip3:

$ pip3 install django~=3.1.2


Any version of Django is fine; the one pinned above is just personal preference.

next Gunicorn should be installed.

$ pip3 install gunicorn


We are almost ready for Django application development.

Soiling your hand
To start our development, we need to create a directory and put all our requirements there

$ mkdir myapp && cd myapp


I chose myapp, but anything is OK

$ django-admin startproject mapp


mapp is the project name. Django will create some files for us and lessen our job: it will create a mapp directory and a few files

.
└── mapp
    ├── manage.py
    └── mapp
        ├── asgi.py
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py

Next we create a small application to run on top of it.

$ cd mapp
$ django-admin startapp hello


.
├── hello
│   ├── admin.py
│   ├── apps.py
│   ├── __init__.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
├── manage.py
└── mapp
    ├── asgi.py
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py

now open the mapp/settings.py

$ vi mapp/settings.py


add 'hello' to the INSTALLED_APPS list

INSTALLED_APPS = [
    ...
    ...
    'hello',
]



next open views.py
$ vi hello/views.py


...
...
from django.http import HttpResponse
def homePageView(request):
    return HttpResponse('Hello, World!')



create a file urls.py under hello and open it

$ vi hello/urls.py


add the following code to it.


from django.urls import path
from .views import homePageView
urlpatterns = [
    path('', homePageView, name='home')
]



Update mapp/urls.py as below


from django.contrib import admin
from django.urls import path, include
urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('hello.urls')),
]



Now test our app by running

$ python3 manage.py runserver


In browser, open http://127.0.0.1:8000
and enjoy your app

OK, you might have seen some warnings here; please don't worry. Execute

$ python3 manage.py migrate


some messages will be displayed and now again execute

$ python3 manage.py runserver


This time there are no warnings, right? Ok good...

Deployment part

Open settings.py

$ vi mapp/settings.py


and change following

...
ALLOWED_HOSTS = [ '*' ]
...
...
STATIC_ROOT = '/var/www/static'
MEDIA_ROOT = '/var/www/media'


and make sure you created those directories

$ sudo mkdir /var/www/static
$ sudo mkdir /var/www/media


Next run

$ sudo python3 manage.py collectstatic


we are almost done.

Running Gunicorn

First copy mapp/wsgi.py as mapp/mapp.wsgi

$ cp mapp/wsgi.py mapp/mapp.wsgi
$ sudo gunicorn mapp.wsgi --workers=5 --bind=127.0.0.1:8888


Gunicorn is now running; you can check with http://127.0.0.1:8888 and it will show you the hello world page.

But we need nginx to serve it on the internet, so we have to tell nginx to serve for us.

Use the following code as a .conf file in the /etc/nginx/conf.d/ directory;
we assume it is called myapp.conf.
server {
    # the port your site will be served on
    listen      80;
    # the domain name it will serve for
    server_name 192.168.1.3;   # substitute by your FQDN and machine's IP address
    charset     utf-8;

    #Max upload size
    client_max_body_size 75M;   # adjust to taste

    # Django static files. You should run 'django manage.py collectstatic'
    location /static  {
        alias /var/www/static;      # your Django project's media files
    }
    location /media {
    alias /var/www/media;
    }
    # Finally, send all non-media requests to the Django server.
    location / {
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}



restart nginx server by using

$ sudo systemctl restart nginx


Now check from a remote machine using http://IP.address.of.server/

Do you get it properly? If not, check the firewall settings on the server, then check file permissions, etc.


goodluck.

Friday 24 July 2020

Fedora 32 on Google Cloud



Google doesn't provide a Fedora image for its cloud platform, and neither does Fedora, but this won't stop us. It is a nice learning adventure for orthodox people like me.

Those who started their careers in computing in the late 80's and early 90's can understand my pain 😬

Well, during this time of the COVID-19 pandemic I thought of taking an adventure ride into the world of virtualization. Virtualization is not new to me: during my days at Red Hat I did some virtualization work porting drivers, but that was a very short-lived assignment for me. I was also fascinated by VMware's ESX Server community edition, and to try out my toy operating system NanoOS I needed some platform, so I was using QEMU and BOCHS.
But setting up a VM in the cloud is a new thing for me. If you would like to learn along and see how to do it, then read along.

Prerequisites 

As I don't know much about Windowz, I will restrict myself to Google's cloud console as much as possible, and at times a Linux desktop.
  • Google account
  • Machine with Fedora Desktop(any Linux distro will do but I like Fedora)
  • Google SDK for Linux installed on your machine.
Deploying a VM needs an image to be available. There is no image for Fedora 32, but similar images are available for CentOS and RHEL. We could choose either, but I prefer CentOS over RHEL.

Plan of Action 
  • We will create a VM using a CentOS image. This will be a very minimal one, as we will only use it to create the Fedora 32 image.
  • Download Fedora cloud image from Fedora download site
  • Tweak the Fedora image to make it suitable for Google.
  • Upload the image
  • Create an instance of it 
Creating a VM 
Here you need a Google account with cloud permissions; your regular Gmail account alone won't work here. Please refer to the Google site for the procedure; it's not difficult, but it's not relevant here, hence omitted.

Similarly, install the Google Cloud SDK (gcloud and gsutil). Information about the procedure is available on the Google SDK site.
  • Log in to Google Cloud; the screen below will be displayed
  • Click on the menu (3 horizontal lines)

  • From the drop-down, select Compute Engine

  • Next it will display a page with all VM instances, if any; for us it should be empty. At the top there is an icon with a plus


  • Now a form will be displayed; it will ask you to enter a name for your VM instance. We will put boiler-plate there. Don't worry about the other options except Boot Disk.

  • On the Boot Disk option, the default will be Debian. There is a Change button next to it; clicking it will display several options.


  • On clicking Change, something like this will appear. We will select CentOS. On the next screen it will ask which version to use; we will use CentOS 8 as it has the dnf command, but if you prefer yum you can choose an older version.





  • Allow it some time, say 5 minutes; then our new VM will be displayed on the VM Instances page.


OK, we have successfully created a VM on Google Cloud; now we can enjoy it for some time 🙆. Relax, get a coffee and think about what we have achieved.
After this point our journey will be somewhat tougher, as we will be doing some sysadmin work (yes, sysadmins are really great people; they do a lot of hectic work to ease our lives).

Click on the SSH button at the end of the row for our newly created VM instance. It will open a new browser window with a shell prompt on our VM. Check whether your commands are accepted.

Download Fedora cloud image

As we know, there is no Google-compatible image for Fedora available, but cloud images are. We can download one and adjust it for Google compatibility.
On the SSH terminal shell prompt, check whether wget is available; if not, install it using dnf or yum.

~]$ sudo dnf install wget

So go ahead and download the Fedora Cloud image; choose the raw image for convenience. Remember we are downloading into the VM on Google, not onto our desktop.

~]$ wget -c"https://download.fedoraproject.org/pub/fedora/linux/releases/32/Cloud/x86_64/images/Fedora-Cloud-Base-32-1.6.x86_64.raw.xz"

It will be downloaded to the home folder in the VM. Give it some time as it is a large file, but don't worry, it should finish in a few minutes, say 2 to 3.

Preparing the image
The downloaded file is in .raw.xz format (an xz-compressed raw disk image), so it has to be decompressed first.

~]$ xz -dv Fedora-Cloud-Base-32-1.6.x86_64.raw.xz

It is better to copy this file to a shorter name like disk.raw and work on that.
  • Mount the image

~]$ sudo losetup  /dev/loop0 disk.raw
~]$ sudo kpartx -a /dev/loop0

mount the disk
$ sudo mkdir /mnt/disk
$ sudo mount /dev/mapper/loop0p1 /mnt/disk


  • Bind mount some filesystems

~]$ sudo mount --bind /dev /mnt/disk/dev
~]$ sudo mount --bind /sys /mnt/disk/sys
~]$ sudo mount --bind /proc /mnt/disk/proc

Besides this, another file has to be created, /etc/resolv.conf (inside the mounted image, i.e. /mnt/disk/etc/resolv.conf). Its content is very small, as follows.

search domain.name
nameserver 8.8.8.8

It's now time to go into the chroot
$ sudo chroot /mnt/disk
After chrooting, check whether ping works, that is, whether the network works; otherwise adjust your network to work in the chroot environment.

The first thing to do is to update the image:
~]$ sudo dnf update

Next we remove cloud-init and install the Google Compute Engine tools:
~]$ sudo dnf -y remove cloud-init
~]$ sudo dnf -y install google-compute-engine-tools
~]$ sudo dnf clean all

Now we are almost done. Google needs some services to be enabled:
~]$ sudo systemctl enable google-accounts-daemon google-clock-skew-daemon \
    google-instance-setup google-network-daemon \
    google-shutdown-scripts google-startup-scripts

Well, our image is almost ready. Now we should leave the chroot environment and unmount the filesystems we bind-mounted earlier.

~]$ sudo umount /mnt/disk/dev /mnt/disk/sys /mnt/disk/proc
~]$ sudo umount /mnt/disk
~]$ sudo losetup -d /dev/loop0

In the above steps, unmounting the disk may throw a "resource busy" error; don't worry. Give it some time to finish writing, 5 to 10 minutes at most, then unmount the disk and delete loop0. We are done.
Google needs its images to be in *.tar.gz format, so we compress the image as tar.gz.

~]$ tar cvzf fedora-32-google-cloud.tar.gz disk.raw

Now we will upload the image just created, "fedora-32-google-cloud.tar.gz".

We created it inside the VM, so we have to put it into a Google Cloud Storage bucket. Using the web console:
  • Goto Storage
  • Create a bucket
Remember its name. Now from within the VM we can copy this image to the bucket.
~]$ gsutil cp "fedora-32-google-cloud.tar.gz" "gs://[name of the bucket]/"
Replace the placeholder with your bucket name in the above command. It will be uploaded to the bucket in a while and can then be seen listed under the bucket you created.

From your Fedora desktop, run this command:
~]$ gcloud compute images create --source-uri \
    gs://[your bucket]/fedora-32-google-cloud.tar.gz \
    --licenses "https://compute.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx" \
    fedora-32-google-cloud-image
Wait a few seconds and an image will be created; it can be seen under the Images section of the web console. Note the --licenses clause: it is optional and you can omit it, but if you do, the created image won't be able to run virtual machines inside it. Yes, you read that correctly: a nested VM.

I think you have enjoyed your second coffee by now... just joking...

Now the image is ready and we can create an instance from it, just as we created our boiler-plate VM. The only differences are that for the boot image we choose our own image, and we pick a multi-CPU configuration. After the instance is created, check it using SSH and enjoy your multi-CPU Fedora server on Google Cloud.

Well, this is enough for now. Please contact me if you need any help.

73's
DE VU3VFU
Ashok.

Sources :
1) major io
2) linuxmint
3) media com
