
Spring Config Server

Spring Config Server is one of the important modules of Spring Cloud; its purpose is to externalize application configuration. In a microservice architecture, the config server provides a central place where all microservices can look up their configuration, with support for multiple applications and environments.

We will look at a sample config server application that uses git as the storage for the application config. Other forms of storage can be used, but they require some additional changes.

Navigate to https://start.spring.io/, add Config Server to the dependencies, provide the artifact details (or choose your own) and click Generate.


This will download a zip to your local filesystem. Unzip it and import/open the project in your favorite IDE. I am using IntelliJ.


Take a look at the pom.xml: apart from the Spring Boot starter dependency, it has a dependency on spring-cloud-config-server, and it imports the Spring Cloud dependency management POM via a Maven BOM import.

<dependency>
 <groupId>org.springframework.cloud</groupId>
 <artifactId>spring-cloud-config-server</artifactId>
</dependency>

Delete the application.properties file and create an application.yml instead. Set the application name and the server port to 8001 as below

spring:
  application:
    name: my-config-server

server:
  port: 8001


Let us create a git repository locally; it can later be migrated to GitHub.

Create a directory app-config, navigate into it on the command line, and run “git init”

Create a file named my-first-app.yml inside the app-config directory (the file name must match the application name we will query later) and add a property like below

app:
 name: my-first-app

Now add this file to the repository using git add, then commit it using git commit.
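A minimal sketch of the two commands (the commit message here is just an example):

```shell
# From inside the app-config directory (already initialized with "git init"):
git add .
git commit -m "Add initial application config"
```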

Let us add a reference to this repository in our Spring Boot application. To do this, open the application.yml file and point spring.cloud.config.server.git.uri at the repository we created, like below

spring:
  application:
    name: my-config-server
  cloud:
    config:
      server:
        git:
          uri: file:///C:/Users/Amey/git/app-config

Now, to initialize our Spring Boot application as a config server, add the @EnableConfigServer annotation to the main class like below

@SpringBootApplication
@EnableConfigServer
public class MyConfigServerApplication {
	public static void main(String[] args) {
		SpringApplication.run(MyConfigServerApplication.class, args);
	}

}

Now we are ready to run the spring config server application. Right click on the class and run the application.

Now, in the browser, navigate to http://localhost:8001/my-first-app/master to see the config server respond.
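If everything is wired up correctly, the server returns a JSON document shaped roughly like the one below. The exact path in propertySources and the version value (the git commit hash) will differ on your machine:

```json
{
  "name": "my-first-app",
  "profiles": ["master"],
  "label": null,
  "version": "a1b2c3d4e5f6...",
  "state": null,
  "propertySources": [
    {
      "name": "file:///C:/Users/Amey/git/app-config/my-first-app.yml",
      "source": {
        "app.name": "my-first-app"
      }
    }
  ]
}
```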


The URL follows the pattern /{application}/{profile}. Here we passed master as the profile; since our file has no profile-specific suffix, its properties are returned for any profile, and the git branch (the label) defaults to master.

Now let's add another file to the git repo named “my-first-app-dev.yml” and add the following property to it

app:
 name: my-first-app-dev

Let's go back to the browser and, instead of master, use dev as the profile in the url (http://localhost:8001/my-first-app/dev) and check what the server responds with.

Now our config server has done its magic trick: it returned the dev profile for our application. If you look, app.name is now my-first-app-dev, and we did not restart the server; the config server reads the git repository on each request, so it picked up the new profile immediately.

In the next blog, we will see various other configurations that we can use with spring config server.


Apache Spark – Beginner Example

Let's look at a simple Spark job.

We are going to look at a basic Spark example. By the end of the tutorial, you will know:

  1. A basic spark project structure
  2. Bare minimum libraries required to run a spark application
  3. How to run a spark application on local

Git Repo – https://github.com/letsblogcontent/SparkSimpleExample

Pre-requisites
1. Java
2. Maven
3. Intellij or STS (Optional but recommended)

Follow the steps below to complete your first Spark application

  1. Create a new Maven project without any specific archetype. I am using IntelliJ, but you may choose any other suitable editor. I have created a project named “SparkExample”
    • Navigate to File-> New Project
    • Select Maven from Left Panel
    • Do not select any archetype
    • Click on “Next”
    • Name the project “SparkExample”
    • Click on “Finish”
      This should create a new, empty Maven project.
  2. Next, we update the pom.xml with the spark-core dependency as below.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>SparkExamples</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>2.0.0</version>
        </dependency>

    </dependencies>

</project>

3. Now we create a new class “WordCount” in the “com.examples” package and copy the contents below.

package com.examples;


import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Arrays;
import java.util.Map;

public class WordCount {

    public static void main(String[] args) throws Exception {

        SparkConf conf = new SparkConf().setAppName("wordCounts").setMaster("local[3]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile("in/word_count.txt");
        JavaRDD<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());

        Map<String, Long> wordCounts = words.countByValue();

        for (Map.Entry<String, Long> entry : wordCounts.entrySet()) {
            System.out.println(entry.getKey() + " : " + entry.getValue());
        }
    }
}

4. Create a new directory “in” at the project root and add a text file named “word_count.txt” into it. In this example we are going to read this file and use Spark to count the occurrences of each word in it.
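The original sample text is not reproduced here; any plain-text file will do. As a sketch, you could create one like this (the word counts in your output will then differ from the sample output shown in step 6):

```shell
# Create the input directory and a small sample file (any text works):
mkdir -p in
printf 'spark counts words\nspark runs locally\n' > in/word_count.txt
```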

5. Now we build the project to see our output. Since we are using Maven, we can run “mvn clean install” from the command prompt, or use Rebuild in IntelliJ; both work. Once the build is done, we run the application: right-click the WordCount class and run “WordCount.main()”

6. This should fire up a standalone Spark application and run our “WordCount” job, which counts the occurrences of words in the file “word_count.txt”. The output should look like below

Twenties, : 1
 II : 2
 industries. : 1
 economy : 1
  : 7
 ties : 2
 buildings : 1
 for : 3
 eleventh : 1
 ultimately : 1
 support : 1
 channels : 1
 Thereafter, : 1
 subsequent : 1
.....
..

7. Now that we have successfully run the program, let's look at what really happened.
The code below configures the name of our Spark application and sets the master to local[3], which tells Spark to run the application locally using up to 3 worker threads.

SparkConf conf = new SparkConf().setAppName("wordCounts").setMaster("local[3]");

8. The next line initializes the Spark context.

 JavaSparkContext sc = new JavaSparkContext(conf);

9. The code below reads the file and converts it into what is called a Resilient Distributed Dataset (RDD). The file is split into partitions that can be processed in parallel, and we get back a single reference to the RDD for further manipulation.

JavaRDD<String> lines = sc.textFile("in/word_count.txt");

10. The rest of the code is self-explanatory. The RDD API provides operations such as countByValue which, as the name suggests, counts the occurrences of each distinct value in our text file; printing the entries of the resulting map gives a consolidated view of the aggregation.
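As a rough shell analogy (not Spark), what countByValue does over the words of a file is the same aggregation as this classic pipeline:

```shell
# Split a line of text into words, then count occurrences of each word.
# Prints counts such as "2 be", "1 not", "1 or", "2 to":
printf 'to be or not to be\n' | tr -s ' ' '\n' | sort | uniq -c
```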

So, as you can see, writing a Spark application is really easy: a single class is enough to get started. Please comment if you have faced any issue following this tutorial, or like the post if you would like to see more.

Django + Postgres

Until now we have used the SQLite database for all of our tutorials. Let's see how to configure a different database in our Django projects.

Pre-requisite – Postgres database installed on your machine.

Git reference project – https://github.com/letsblogcontent/Python/tree/master/django_postgres

Step 1 – If not already installed, install Django

pip install django

Step 2 – Create a new django project “django_postgres” (Django project names must be valid Python identifiers, so use an underscore rather than a hyphen)

django-admin startproject django_postgres

Step 3 – Navigate to the project directory “django_postgres” and install Django REST framework

cd django_postgres
pip install djangorestframework


Step 4 – Install psycopg2, the Postgres adapter for Python

pip install psycopg2

Step 5 – Now let's create a new app

python manage.py startapp models


Step 6 – Add the app to the INSTALLED_APPS list in your project's settings.py; you will find the file under “<root directory>/django_postgres”. Also add 'rest_framework' there, since Django REST framework needs to be in INSTALLED_APPS for its browsable API.

'rest_framework',
'models.apps.ModelsConfig',

Step 7 – Update the DATABASES section in the settings.py file of the project with your Postgres details. NAME is the database name, and the other parameters are self-explanatory; update them all to match your installation.


DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'Test2',
        'USER': 'user1',
        'PASSWORD': 'user1',
        'HOST': 'localhost',
        'PORT': '',
    }
}
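Note that the database itself must already exist: Django's migrations create tables, not the database. Assuming a local Postgres install, you could provision the database and role named in the settings above with something like the following (on newer Postgres versions you may need additional schema grants):

```shell
psql -U postgres -c "CREATE DATABASE \"Test2\";"
psql -U postgres -c "CREATE USER user1 WITH PASSWORD 'user1';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE \"Test2\" TO user1;"
```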

Step 8 – Let's create a model in the models app so that we can test the database configuration. In the models app directory, update the files below; create any that are not already present.

<root_dir>/django_postgres/models/models.py

from django.db import models


class Company(models.Model):
    name = models.CharField(max_length=50)
    address = models.CharField(max_length=100)
    email = models.CharField(max_length=50)

Here we have created a simple model class named Company.

Create a new file serializers.py in the models app
<root_dir>/django_postgres/models/serializers.py

from rest_framework import serializers
from .models import Company


class CompanySerializer(serializers.ModelSerializer):
    class Meta:
        model = Company
        fields = ['id', 'name', 'address', 'email']

In the views.py file of the models app, add the following content. Here we are using viewsets to configure our views for Company.

<root_dir>/django_postgres/models/views.py


from rest_framework import viewsets
from .models import Company
from .serializers import CompanySerializer


class CompanyViewSet(viewsets.ModelViewSet):
    queryset = Company.objects.all()
    serializer_class = CompanySerializer

Create/Update urls.py file to create url mapping of company views

<root_dir>/django_postgres/models/urls.py

from django.urls import path, include
from . import views
from rest_framework.routers import DefaultRouter

router = DefaultRouter()
router.register('company', views.CompanyViewSet)

urlpatterns = [
    path('', include(router.urls))
]

Update the urls.py file of the django_postgres project, adding a new entry in the urlpatterns section as below

 path('api/', include('models.urls')),



Step 9 – We are pretty much done, and now it is time to test our project. I have pushed the finished project to the git URL mentioned at the start of the page. If something is not working, please write in the comments, or download the project and run it.

Execute the “makemigrations” and “migrate” commands as below

python manage.py makemigrations
python manage.py migrate


Now run the server with the command below

python manage.py runserver


Step 10 – Now, in your browser, navigate to http://localhost:8000/api/company/

Now try adding some new elements and see the results. You can also look in the Postgres database, where you should find a models_company table containing the entries you created.
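Besides the browsable API, you can exercise the endpoints from the command line. A quick sketch with curl (assuming the dev server from the previous step is running on port 8000; the field values are just examples):

```shell
# Create a company (POST), then list all companies (GET):
curl -X POST http://localhost:8000/api/company/ \
     -H "Content-Type: application/json" \
     -d '{"name": "Acme", "address": "1 Main Street", "email": "contact@acme.example"}'
curl http://localhost:8000/api/company/
```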