Spring Boot Batch provides the ability to read data from a database and write it to a CSV file, or to write it to a database using JPA. CSV files are a convenient way to move data from one application to another: the source application reads data from its database and writes it to a CSV file, and the target application uploads the CSV file. In this article, we'll use Spring Boot Batch to read data from a MySQL database and write it to a CSV file.

The data is stored in a table in the MySQL database. Spring Boot Batch reads and processes the data from the table and saves the processed data in a CSV file. A JPA repository is used to read the data from the MySQL database. The CSV (comma separated values) file contains the table data, one row per line as a comma separated string. The FlatFileItemWriter class helps write the table data to the CSV file.
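Concretely, each table row becomes one line of the CSV file. A minimal plain-Java sketch of the per-row conversion, mirroring what the batch writer does with the id, name and salary fields (CsvLineSketch and toCsvLine are illustrative names, not part of the Spring Batch API):

```java
public class CsvLineSketch {
	// Join the field values of one row with commas, as the writer does per record
	static String toCsvLine(int id, String name, int salary) {
		return id + "," + name + "," + salary;
	}

	public static void main(String[] args) {
		System.out.println(toCsvLine(1, "name01", 1000)); // prints 1,name01,1000
	}
}
```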

Database SQL Query

The employee table is created in the MySQL database. The table contains three columns: id, name and salary. Three sample rows are inserted into the table using SQL insert statements. Spring Boot Batch will read the data from the employee table, create a CSV file and add the employee details as comma separated strings.

create table employee (
id int primary key,
name varchar(100),
salary int
);

insert into employee values (1,'name01',1000);
insert into employee values (2,'name02',2000);
insert into employee values (3,'name03',3000);

Data Model class

In this example, a Java bean data model class is created. The Employee class has fields for id, name, and salary. Spring Boot Batch reads each row from the employee table, maps it to an Employee object, and then writes the object to the CSV file.


package com.yawintutor;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Employee {
	@Id
	@GeneratedValue(strategy = GenerationType.AUTO)
	private int id;
	private String name;
	private int salary;

	public int getId() {
		return id;
	}
	public void setId(int id) {
		this.id = id;
	}
	public String getName() {
		return name;
	}
	public void setName(String name) {
		this.name = name;
	}
	public int getSalary() {
		return salary;
	}
	public void setSalary(int salary) {
		this.salary = salary;
	}
}

JPA Repository

The Employee JPA repository provides access to the employee data in the MySQL database. The database configurations are added to the application.properties file. Spring Data JPA generates the implementation of the repository interface at runtime.


package com.yawintutor;

import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee, Integer> {
}

Spring Boot Batch Configurations

The batch job and batch step configurations are stored in the spring boot batch configuration file. The JdbcCursorItemReader reads data from a database table. The FlatFileItemWriter is used to write the table data as a comma separated string on each line to a csv file.

The data from the MySQL Database table will be read using the JdbcCursorItemReader reader. The read data will be converted into a java bean object. The java bean object is used to process the data. In this example, the employee object is created. The employee object is transformed to a comma-separated string and saved to a CSV file.


package com.yawintutor;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class BatchConfig {
	@Autowired
	public JobBuilderFactory jobBuilderFactory;

	@Autowired
	public StepBuilderFactory stepBuilderFactory;

	@Autowired
	MyCustomReader myCustomReader;

	@Autowired
	MyCustomWriter myCustomWriter;

	@Bean
	public Job createJob() {
		return jobBuilderFactory.get("MyJob")
				.incrementer(new RunIdIncrementer())
				.flow(createStep()).end().build();
	}

	@Bean
	public Step createStep() {
		return stepBuilderFactory.get("MyStep")
				.<Employee, Employee> chunk(1)
				.reader(myCustomReader)
				.writer(myCustomWriter)
				.build();
	}
}

ItemReader Implementation

The Reader class is created by extending the JdbcCursorItemReader class and implementing the ItemReader interface. The data source and SQL query are set on the JdbcCursorItemReader. A RowMapper implementation class is required to convert each result set row into a Java bean object. The EmployeeRowMapper is created by implementing the RowMapper interface.


package com.yawintutor;

import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Component;

@Component
public class MyCustomReader extends JdbcCursorItemReader<Employee> implements ItemReader<Employee> {
	public MyCustomReader(@Autowired DataSource dataSource) {
		setDataSource(dataSource);
		setSql("SELECT id, name, salary FROM employee");
		setRowMapper(new EmployeeRowMapper());
	}

	public class EmployeeRowMapper implements RowMapper<Employee> {
		@Override
		public Employee mapRow(ResultSet rs, int rowNum) throws SQLException {
			Employee employee = new Employee();
			employee.setId(rs.getInt("id"));
			employee.setName(rs.getString("name"));
			employee.setSalary(rs.getInt("salary"));
			return employee;
		}
	}
}

ItemWriter Implementation

The Writer class is created by extending FlatFileItemWriter, which implements the ItemWriter interface. The FlatFileItemWriter class helps convert a Java bean object into a line of the CSV file. The CSV file name is set on the FlatFileItemWriter. Column headers can be included in the CSV file by setting a header callback on the FlatFileItemWriter.


package com.yawintutor;

import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor;
import org.springframework.batch.item.file.transform.DelimitedLineAggregator;
import org.springframework.core.io.FileSystemResource;
import org.springframework.stereotype.Component;

@Component
public class MyCustomWriter extends FlatFileItemWriter<Employee> {

	public MyCustomWriter() {
		setResource(new FileSystemResource("data/output.csv"));
		setLineAggregator(getDelimitedLineAggregator());
	}

	public DelimitedLineAggregator<Employee> getDelimitedLineAggregator() {
		BeanWrapperFieldExtractor<Employee> beanWrapperFieldExtractor = new BeanWrapperFieldExtractor<Employee>();
		beanWrapperFieldExtractor.setNames(new String[] {"id", "name", "salary"});

		DelimitedLineAggregator<Employee> delimitedLineAggregator = new DelimitedLineAggregator<Employee>();
		delimitedLineAggregator.setDelimiter(",");
		delimitedLineAggregator.setFieldExtractor(beanWrapperFieldExtractor);
		return delimitedLineAggregator;
	}
}

Scheduler class

The scheduler class launches the batch job at fixed intervals. The @Scheduled annotation triggers the job every five seconds, and a timestamp is added as a job parameter so that each job instance is unique.

package com.yawintutor;

import java.text.SimpleDateFormat;
import java.util.Calendar;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@Configuration
@EnableScheduling
public class SchedulerConfig {

	@Autowired
	JobLauncher jobLauncher;

	@Autowired
	Job job;

	SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S");

	@Scheduled(fixedDelay = 5000, initialDelay = 5000)
	public void scheduleByFixedRate() throws Exception {
		System.out.println("Batch job starting");
		JobParameters jobParameters = new JobParametersBuilder()
				.addString("time", format.format(Calendar.getInstance().getTime())).toJobParameters();
		jobLauncher.run(job, jobParameters);
		System.out.println("Batch job executed successfully\n");
	}
}

Spring boot Batch Main Class

The main class of the Spring Boot Batch application is shown below. The default implementation is used for the main class. The main method in this class is executed when the Spring Boot Batch application starts.


package com.yawintutor;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootBatch4Application {

	public static void main(String[] args) {
		SpringApplication.run(SpringBootBatch4Application.class, args);
	}
}
Application properties details

The application properties are placed in the application.properties file in the resources folder. The application.properties file contains two types of settings: one for the MySQL database configuration, and one for allowing the batch-related tables to be created in the database.
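The exact property values depend on the local environment; a minimal sketch of the two groups of settings, assuming a local MySQL instance with a database named employeedb and placeholder credentials, might look like this:

```properties
# MySQL database configuration (hostname, database name and credentials are placeholders)
spring.datasource.url=jdbc:mysql://localhost:3306/employeedb
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

# Allow Spring Batch to create its metadata tables in the database
spring.batch.initialize-schema=ALWAYS
```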



Application configuration using pom.xml file

The required application configurations and dependencies are defined in the Maven pom.xml file. The pom.xml file contains dependencies for Spring Boot Batch, the JPA repository and the MySQL database. If you are using a database other than MySQL, add the dependency for that database to the pom.xml file instead.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.5.2</version> <!-- use the Spring Boot version of your project -->
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.yawintutor</groupId>
	<artifactId>SpringBootBatch4</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<description>Demo project for Spring Boot</description>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-batch</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-jpa</artifactId>
		</dependency>
		<dependency>
			<groupId>mysql</groupId>
			<artifactId>mysql-connector-java</artifactId>
			<scope>runtime</scope>
		</dependency>
	</dependencies>
</project>

How to run

When the Spring Boot application starts, the following logs are shown in the console window.

2021-07-27 12:33:35.356  INFO 55819 --- [           main] o.s.batch.core.step.AbstractStep         : Step: [MyStep] executed in 76ms
2021-07-27 12:33:35.369  INFO 55819 --- [           main] o.s.b.c.l.support.SimpleJobLauncher      : Job: [FlowJob: [name=MyJob]] completed with the following parameters: [{run.id=1, time=2021-07-27 12:32:14.800}] and the following status: [COMPLETED] in 103ms
Batch job starting
2021-07-27 12:33:40.197  INFO 55819 --- [   scheduling-1] o.s.b.c.l.support.SimpleJobLauncher      : Job: [FlowJob: [name=MyJob]] launched with the following parameters: [{time=2021-07-27 12:33:40.175}]
2021-07-27 12:33:40.215  INFO 55819 --- [   scheduling-1] o.s.batch.core.job.SimpleStepHandler     : Executing step: [MyStep]
2021-07-27 12:33:40.257  INFO 55819 --- [   scheduling-1] o.s.batch.core.step.AbstractStep         : Step: [MyStep] executed in 42ms
2021-07-27 12:33:40.265  INFO 55819 --- [   scheduling-1] o.s.b.c.l.support.SimpleJobLauncher      : Job: [FlowJob: [name=MyJob]] completed with the following parameters: [{time=2021-07-27 12:33:40.175}] and the following status: [COMPLETED] in 64ms
Batch job executed successfully

Batch job starting
2021-07-27 12:33:45.288  INFO 55819 --- [   scheduling-1] o.s.b.c.l.support.SimpleJobLauncher      : Job: [FlowJob: [name=MyJob]] launched with the following parameters: [{time=2021-07-27 12:33:45.270}]
2021-07-27 12:33:45.302  INFO 55819 --- [   scheduling-1] o.s.batch.core.job.SimpleStepHandler     : Executing step: [MyStep]

Output file

The output.csv file is created in the data folder. Each employee record appears on its own line as a comma separated string.
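Given the three sample rows inserted earlier and the id, name, salary field order configured on the writer, the generated data/output.csv is expected to contain:

```csv
1,name01,1000
2,name02,2000
3,name03,3000
```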


