This article is split into two parts and aims to show how to create a small application (or a microservice, if you prefer) using Akka HTTP (Scala) and a Redis database. The application is very focused: it provides the means for a customer to be added, removed, and retrieved.
The following technologies, tools, and libraries will be used:
- Akka HTTP / Akka HTTP JSON Support / Akka HTTP TestKit
- Redis database
In this first part, we’ll focus on the layer responsible for communicating with Redis, while the second part will focus on Akka HTTP.
Continue reading “Building a service using Akka HTTP and Redis – Part 1 of 2”
In this article we will create a simple but comprehensive Scala application responsible for reading and processing a CSV file in order to extract information from it.
Although simple, this app will touch on the following points:
- Creating an application from scratch using SBT
- Usage of traits, case classes, and a few collection methods
- Usage of the ScalaTest framework to write unit tests
Continue reading “Reading and processing a CSV file with Scala”
It’s not rare to end up needing to run some code during the initialization of a given class or application. This article presents a few options for achieving this.
Run code when the class is loaded
In this scenario, you want to run some code when a given class is loaded by the JVM, but you want that code to be executed only once, regardless of the number of instances of that class. For this, you can use a static initialization block.
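As a minimal sketch of the idea above (the `Config` class name and the property are illustrative), a static initialization block runs exactly once, when the JVM first loads the class, no matter how many instances are later created:

```java
public class Config {

    public static final java.util.Properties SETTINGS = new java.util.Properties();

    // Runs once, when the JVM loads the class -- before any instance exists
    static {
        SETTINGS.setProperty("initialized", "true");
        System.out.println("Config class loaded");
    }

    public static void main(String[] args) {
        // Creating instances does not run the static block again
        new Config();
        new Config();
        System.out.println(SETTINGS.getProperty("initialized"));
    }
}
```

Running this prints "Config class loaded" a single time, even though two instances are created.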
Continue reading “Running code at application/class startup”
CDI (Contexts and Dependency Injection) is a great specification introduced in Java EE 6 that offers a lot of resources. One of them is producer methods, which we’ll be talking about in this post.
CDI producers can be used in scenarios such as:
- Making non-CDI beans eligible for injection into CDI beans. You may be using a library that doesn’t expose its classes via CDI, but you still want to inject them into your CDI beans for convenience.
- Creating a bean whose constructor requires arguments.
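As a sketch of the first scenario (the class name is illustrative; Gson is one of the non-CDI libraries mentioned in the course post below), a producer method annotated with `@Produces` makes a library class injectable:

```java
import javax.enterprise.inject.Produces;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class GsonProducer {

    // Gson ships without CDI metadata; this producer makes it injectable
    @Produces
    public Gson createGson() {
        return new GsonBuilder().create();
    }
}
```

With this in place, any CDI bean in the application can simply declare `@Inject Gson gson;` and the container will call the producer method to obtain the instance.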
Continue reading “CDI Producer”
I have recently published a course called “Build an application from scratch: JEE 7, Java 8 and Wildfly” on Udemy. This course is all about using a lot of recent and important Java technologies and best practices of software development in order to create a complete enterprise application. Some of the technologies and tools covered are:
- Java EE 7: JPA 2.1, Bean Validation 1.1, JMS 2.0, EJB 3.2, CDI 1.1, JAX-RS 2.0, security.
- Java 8: Lambda expressions, Date and Time API, streams and more.
- Libraries such as Gson, JUnit, Mockito and Hamcrest.
- Arquillian for integration tests.
- WildFly 8 (formerly JBoss) as the application server.
- PostgreSQL for production and HSQLDB/H2 for unit and integration tests.
- Eclipse IDE (this is a Maven project, so you can use another IDE).
- Postman Chrome extension to test all our REST endpoints.
If you are interested and want to know more, you can check out the course by clicking here. On that page, you can find more details about the course and watch some free videos.
If you have any questions, just let me know.
In this post we will see a brief introduction to the Spock testing framework and how to integrate it with Maven and Eclipse.
Spock is a framework for writing tests in the Groovy language, and its adoption has been increasing lately. Although Groovy is the language you must use to write your tests with Spock, the framework can test both Groovy and Java applications and is compatible with most Java IDEs, including Eclipse.
The idea of writing tests in Groovy can seem like an obstacle for Java developers who are not familiar with the language. However, the Groovy knowledge required is quite basic, and the Spock documentation itself is very helpful on this point. I myself know very little about Groovy right now, and this hasn’t been an issue when writing tests with Spock.
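As a sketch of the Maven side of the integration (version numbers are illustrative; check the Spock and GMavenPlus documentation for current ones), you add `spock-core` as a test dependency and the GMavenPlus plugin so Maven compiles the Groovy test sources:

```xml
<!-- Spock itself, matched to the Groovy version in the artifact suffix -->
<dependency>
    <groupId>org.spockframework</groupId>
    <artifactId>spock-core</artifactId>
    <version>1.1-groovy-2.4</version>
    <scope>test</scope>
</dependency>

<!-- Compiles the Groovy sources under src/test/groovy -->
<plugin>
    <groupId>org.codehaus.gmavenplus</groupId>
    <artifactId>gmavenplus-plugin</artifactId>
    <version>1.6</version>
    <executions>
        <execution>
            <goals>
                <goal>compileTests</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```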
Continue reading “Integrating Spock with Maven/Eclipse”
Continuing with the subject presented in the first part of this article, let’s talk a little more about asynchronous processing with ExecutorServices. In this second part, let’s approach the following use case.
Our software will receive and process a batch of transactions. After processing the batch, the system must produce a “summary” of the run, reporting how many transactions were processed successfully and how many failed. This summary could be sent to another system, for example, which could then analyze the success rate of the batch.
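The use case above can be sketched with an `ExecutorService` (the class name, the negative-amount failure rule, and the sample batch are all illustrative — just enough to show the success/error counting):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class BatchSummary {

    // Processes the batch concurrently and returns the summary line
    static String process(List<Integer> transactions) {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        AtomicInteger ok = new AtomicInteger();
        AtomicInteger failed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(transactions.size());

        for (Integer amount : transactions) {
            executor.submit(() -> {
                try {
                    if (amount < 0) {
                        throw new IllegalArgumentException("negative amount");
                    }
                    ok.incrementAndGet();      // transaction processed successfully
                } catch (RuntimeException e) {
                    failed.incrementAndGet();  // transaction processed with error
                } finally {
                    done.countDown();
                }
            });
        }

        try {
            done.await();                      // wait for the whole batch before summarizing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        executor.shutdown();
        return "success=" + ok.get() + " error=" + failed.get();
    }

    public static void main(String[] args) {
        System.out.println(process(Arrays.asList(10, -5, 20, -1, 30)));
    }
}
```

For this sample batch, three transactions succeed and two fail, so the summary line is `success=3 error=2`.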
Continue reading “Asynchronous processing with ExecutorServices – Part 2”