DevOps Introduction (Part 3)

Hey there, nice to see you back for this third part of our DevOps tutorial series! 😃

Alright, so today the roadmap will be a bit longer than usual, so hang tight and don’t hesitate to take breaks between the different parts if necessary 😉

Tutorial roadmap:

  • Creating a new PostgreSQL user and our database

  • Creating our tables using python scripts

  • Performing the initial data input of our tables with python

Part 1: Creation of our PostgreSQL database

Alright, so first things first: we need to create a new PostgreSQL database in which we will develop our data architecture with our different tables. So let’s do just that by SSHing into our existing DO droplet.

From there, given the default installation of PostgreSQL on Ubuntu, we have to switch from the root user to the postgres user in order to access our PostgreSQL instance, using the following command:
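$ su - postgres

(We’ll reuse this same switch later on to double-check our work from the Postgres shell.)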

Alright, so now let’s create the dedicated Postgres user that we’ll use to access our database remotely. To do so, use the following command and fill in the prompts as shown below:
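If you don’t have the exact command in front of you, one way to do it, assuming the “myuser” name that we’ll reference later with createdb, is the following:

$ createuser --interactive --pwprompt myuser

The --pwprompt flag asks you to set a password for the new role, while --interactive walks you through the usual prompts (whether the role should be a superuser, be allowed to create databases, and so on).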

Note: Ideally we would avoid creating our new user as a superuser, given that we’ll use it later to connect our database to Grafana. However, since this is just an introductory tutorial, we’ll consider it good enough from a security standpoint given how little is at stake here.

To check that your new user has indeed been created you can use the Postgres Shell as follows:

Type psql in your terminal to access the Postgres shell as shown below

From there, type the following command:

postgres=# \du

From there you should be met with a table of the following format, inside of which you’ll see your new user.
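It should look roughly like this (the exact attributes will depend on how you answered the createuser prompts):

                                   List of roles
 Role name |                         Attributes                         | Member of
-----------+------------------------------------------------------------+-----------
 myuser    | Superuser                                                  | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}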

Alright, so from there exit the Postgres shell using the following command:

postgres=# \q

Then, still using the postgres user, type the following command to create a new database called “ethdb” and assign it to our newly created user:

$ createdb -O myuser ethdb
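If you want to double-check that the database exists, you can list all databases while still logged in as the postgres user:

$ psql -l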

Alright, and that’s it for our first part: we now have our database and a dedicated user to access it. Take a quick break if needed before jumping to Part 2.

Part 2: Creation and initialisation of our metrics tables using Python scripts

Alright so now let’s create the tables that we’ll use to store the network metrics that we want.

⚠️ ⚠️ Don’t forget to come back to the root user before continuing by using the following command:
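Assuming you switched with su earlier, simply exiting the session drops you back into root:

$ exit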

Alright, so from there use the following command to list all the available folders at the level where we currently are:

$ ls

Normally you should see the following:

From there, use the following command to go one level up in the folder architecture of the server:

$ cd ..

From there, if you use the previous “ls” command, you should get the following result:

Alright, so now that we’re at the right level, let’s start by creating the folders in which we’ll store our Python scripts. To do so, use the following mkdir (make directory) commands:

$ mkdir tutorials
$ mkdir tutorials/introduction 
$ mkdir tutorials/introduction/code 
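As a side note, mkdir’s -p flag lets you create the whole nested path in a single command:

$ mkdir -p tutorials/introduction/code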

Now let’s enter our introduction folder:

$ cd tutorials/introduction

From there, let’s create two additional text documents:

  • A ReadMe.txt file in which we’ll present an overview of our project

  • A Requirements.txt listing all the packages and version requirements to run this project

To create those two files we’ll use the “touch” command as follows:

$ touch ReadMe.txt 
$ touch Requirements.txt

Normally, after all that, you should end up with an “introduction” folder looking like the snippet below.
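Running ls from inside the introduction folder should give you something along these lines:

$ ls
code  ReadMe.txt  Requirements.txt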

Alright so now that we’re all set from a project architecture standpoint, let’s start creating our PostgreSQL tables.

Here, given the limitations of the Santiment API provider enumerated in our first tutorial, it seems best to create the four following tables:

  • historical_metrics: Storing all historical prices and marketcap up to today

  • network_growth: Storing all historical network growth values up to last month

  • daily_metrics: Storing the daily values for the metrics where we only want to show the latest value in our dashboard

  • delayed_metrics: Same as the daily_metrics table, but for metrics where we only have access to last month’s values

In order to create this data architecture, let’s get back to our terminal and enter into our code folder.

Note: Depending on where you are in your folder architecture the command might change, so keep in mind that to enter a folder at a given level you only need to use the following command:

$ cd [name of the folder you want to enter]

and if you want to go back up one level, just use the following command:

$ cd ..

Alright, so once you’re inside the code folder, go ahead and create a new Python file using the following command:

$ nano init.py

From there you’ll end up in the nano text editor, which, along with Vim, is one of the most common text editors used on Linux.

From there, go to the GitHub page of the tutorial and copy and paste the content of the init.py file into the nano editor.

Let’s unpack the code that we used here a little bit.

So first we import the psycopg2 Python package, which will enable us to connect to our Postgres instance through Python.

Then we define the create_tables() function composed of two blocks through which we’ll create each of our tables.

  • Block 1: We create a command variable composed of usual SQL commands in order to create each of our tables with their respective metrics.

  • Block 2: We use a try/except block to connect to our local PostgreSQL instance and create a cursor that we’ll then use to execute the content of our command variable.

Finally, we run our function in order to create our data architecture in our PostgreSQL database.
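To make this a bit more concrete, here is a minimal sketch of what an init.py following that structure could look like. The column definitions and connection credentials below are purely illustrative (the real ones live in the tutorial’s GitHub repo), and note the shebang line at the top, which is what will let us run the file directly with ./init.py once it is executable.

#!/usr/bin/env python3
# Sketch of init.py: create our metric tables (illustrative column definitions only)
import psycopg2

def create_tables():
    # Block 1: the SQL commands creating each of our tables
    commands = (
        """
        CREATE TABLE IF NOT EXISTS historical_metrics (
            metric_date DATE PRIMARY KEY,
            price NUMERIC,
            marketcap BIGINT
        )
        """,
        """
        CREATE TABLE IF NOT EXISTS network_growth (
            metric_date DATE PRIMARY KEY,
            value INTEGER
        )
        """,
        # ... daily_metrics and delayed_metrics follow the same pattern
    )
    conn = None
    try:
        # Block 2: connect to the local instance and execute each command through a cursor
        conn = psycopg2.connect(dbname="ethdb", user="myuser",
                                password="mypassword", host="localhost")
        cur = conn.cursor()
        for command in commands:
            cur.execute(command)
        cur.close()
        conn.commit()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()

if __name__ == "__main__":
    create_tables()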

Alright, so now that we know what the code is doing, let’s run it on our remote server. To do so, first save all modifications made in the nano text editor by pressing Ctrl + S, and then exit by pressing Ctrl + X.

Once back in the code folder, install the psycopg2 package with the following command:

$ pip install psycopg2
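Note that building psycopg2 from source needs the PostgreSQL client headers and a C compiler; if the install fails on a bare droplet, the pre-compiled wheel is a common drop-in alternative:

$ pip install psycopg2-binary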

Then from there we have to make our script executable. To do so we’ll use the following command:

$ chmod +x init.py

Normally, if everything went well, you should see the name of your init.py file appear in green when you list the folder with ls. From there, just type the following command to run your Python script on your remote server:

$ ./init.py 

Alright, so normally if everything worked you should not see any output once the script has finished running. However, it’s always better to double-check, so reusing the same method as before, we can easily verify that we indeed created those tables with the following sequence of commands:

$ su - postgres 
$ psql 
# We enter the Postgres shell here
postgres=# \c ethdb 
ethdb=# \dt
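The result should look roughly like this:

          List of relations
 Schema |        Name        | Type  | Owner
--------+--------------------+-------+-------
 public | daily_metrics      | table | user1
 public | delayed_metrics    | table | user1
 public | historical_metrics | table | user1
 public | network_growth     | table | user1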

Note:

  • The “\c ethdb” command is a psql command that connects to the ethdb database

  • The “\dt” command is a psql shortcut that lists all available tables

  • Here the “user1” mentioned comes from the fact that I used a different username when creating these tables

Alright, so now that we have our tables, let’s go ahead and populate them.

Part 3: Initial population of our psql tables

As before with the creation of our psql tables, we’re going to use Python scripts. However, this time we’re going to have to make calls to the Santiment API by reusing the functions that we previously created in our first tutorial.

As such, before creating our populating scripts, let’s first create a new Python script called functions.py, using the following command inside the code folder:

$ nano functions.py 

And there, just copy and paste the content of the functions.py file that you can find on the tutorial’s GitHub repo.

Also, don’t forget to make it executable by using, as before, the following command:

$ chmod +x functions.py

Alright, so now let’s move on to our data-populating scripts per se.

Script 1: historical_metrics init

Let’s start with our historical_metrics table. First, we create a dedicated Python script that we’ll call table_init.py:

$ nano table_init.py 

As before, copy and paste into nano the content of the table_init.py file available on GitHub.

Let’s unpack our code here a little bit, given that a lot is going on at the same time 😅

So first, as you can see in the code, we import the functions that we need from our functions.py file, as well as the psycopg2 package allowing us to connect to our local psql instance, and other Python packages giving us access to date variables.

Second, we create the update_db() function, which populates our historical_metrics table for a given date.

Third, we create the populate_table() function, which calls our “get_historical_price” and “get_historical_marketcap” functions to get all the data points for the past 6 months, and uses our previous update_db() function on each of those data points to populate our psql table.

Note: As you can see below, we also need to do a bit of formatting on our data, given that the data delivered by the Santiment API in Python is not directly compatible with our integer psql type.
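As a rough illustration of that structure, here is a sketch of what such a table_init.py can look like. The column names, credentials and the exact return format of the Santiment helper functions are assumptions on my part; the real script is on GitHub.

#!/usr/bin/env python3
# Sketch of table_init.py: populate historical_metrics with the past 6 months of data
from datetime import date, timedelta
import psycopg2
from functions import get_historical_price, get_historical_marketcap  # signatures assumed

def update_db(day, price, marketcap):
    # Insert one day of data into the historical_metrics table
    conn = psycopg2.connect(dbname="ethdb", user="myuser",
                            password="mypassword", host="localhost")
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO historical_metrics (metric_date, price, marketcap) VALUES (%s, %s, %s)",
        (day, price, int(marketcap)),  # cast to int so the value fits the integer column type
    )
    conn.commit()
    cur.close()
    conn.close()

def populate_table():
    start = date.today() - timedelta(days=180)                  # past 6 months
    prices = get_historical_price(start, date.today())          # assumed to return one value per day
    marketcaps = get_historical_marketcap(start, date.today())
    for i in range(len(prices)):
        update_db(start + timedelta(days=i), prices[i], marketcaps[i])

if __name__ == "__main__":
    populate_table()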

Alright, so now let’s run this Python script. As per usual, make it executable and run it with the following sequence of commands:

$ chmod +x table_init.py
$ ./table_init.py

If everything goes smoothly, we’ll be met with the following message at the end of the process:

If you want to double check the code execution, run the following sequence of commands:

$ su - postgres
$ psql 
postgres=# \c ethdb
ethdb=# select * from historical_metrics; 

Note: To exit that view press q

Script 2: network_growth init

Alright, so here, using the same process previously employed for table_init.py, we create a new Python script called table_init1.py to populate our network_growth table, and copy and paste into it the content of the table_init1.py file from GitHub:

$ nano table_init1.py 

The code employed here is pretty much the same as the one used before for table_init.py, slightly modified to get the network growth value and to take into account the fact that we can only get that data up to last month.

From there as per usual we make that file executable and run it.

Script 3: daily_metrics init

Alright, from here on we’ll go a bit quicker in our explanations, given that we’ll again use the same process.

So create a file called table_init2.py, copy and paste the content from the corresponding GitHub file into nano, and from there make the script executable and run it.

Script 4: delayed_metrics init

Create a file called table_init3.py, copy and paste the content from the corresponding GitHub file into nano, make the script executable and run it.

And that’s it!! 🥳🥳

We finally have a full data architecture on our DO server, with a database and a set of tables storing the values of our different metrics. So congrats if you made it this far, it was definitely not easy, so give yourself a well-deserved pat on the back 😃

Next time we’ll build on this work to see how to create scripts that automatically populate our tables on a daily basis, and also how to connect our psql database to Grafana in order to create our Ethereum dashboard. So stay tuned for the next and final part of this tutorial series, and in the meantime, happy coding!

See you next time 🖖
