Traditionally, most scheduled tasks in the Java applications I've worked on have used Spring's scheduling feature. Spring runs methods that you annotate with @Scheduled in the background of the application. This works fine if only one instance of the application is running.
However, applications are increasingly being containerized and run on container orchestration platforms, such as Kubernetes, to take advantage of horizontal scaling, so that multiple instances of an application are running. This creates a problem for the way scheduled tasks have traditionally been used: because scheduled tasks run in the background of the application, we end up with duplicated (and potentially competing) scheduled tasks as we horizontally scale the application.
To address this problem of scaling Java scheduled tasks in Kubernetes, I've created a new pattern that works with three popular open source dependency injection frameworks: Spring Boot, Micronaut, and Guice with Java Spark. Let's walk through the scenario below to understand the pattern.
The Scenario
Let's say we have a requirement to run some business logic that lives in the service layer of a Spring Boot API as a scheduled task. For the purposes of this article, let's say the service looks like this:
@Service
public class HelloService {

    public String sayHello() {
        return "Hello World!";
    }
}
Traditionally, we'd accomplish this by writing a class in the Spring Boot API that calls the service logic and annotating a method with @Scheduled, like so:
@Component
@Slf4j
public class ScheduledTasks {

    private final HelloService helloService;

    @Autowired
    public ScheduledTasks(HelloService helloService) {
        this.helloService = helloService;
    }

    // Spring cron expressions include a seconds field: run at 8:00 AM, Monday through Friday.
    @Scheduled(cron = "0 0 8 * * MON-FRI")
    public void runHelloService() {
        String hello = this.helloService.sayHello();
        log.info(hello);
    }
}
While this solution is simple, it limits our ability to scale the application horizontally in a modern container orchestration platform like Kubernetes. As this API horizontally scales to 2, 3, 4 … n pods, we'll have 2, 3, 4 … n scheduled tasks duplicating the same scheduled task logic, which can cause duplicated work, race conditions and inefficient use of resources.
There are solutions like ShedLock and Quartz that address this problem. Both ShedLock and Quartz use an external database to allow only one of the scheduled tasks in the n pods to execute at a given time. While this approach works, it requires an external database. Also, an instance of the scheduled task still runs in every pod, which consumes application/pod memory, even though only one of them will execute its business logic. We can improve on these solutions by eliminating the multiple scheduled task instances altogether.
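For context, a ShedLock-based setup typically looks something like the sketch below. This is a minimal illustration rather than code from this article's repo; it assumes ShedLock's Spring integration with a JDBC-backed lock provider, and the lock name and durations are placeholders.

@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30M")
class SchedulingConfig {

    @Bean
    LockProvider lockProvider(DataSource dataSource) {
        // A shared "shedlock" table in the external database coordinates the pods.
        return new JdbcTemplateLockProvider(new JdbcTemplate(dataSource));
    }
}

@Component
class LockedScheduledTasks {

    // Every pod still schedules this method, but only the pod that acquires the lock runs the body.
    @Scheduled(cron = "0 0 8 * * MON-FRI")
    @SchedulerLock(name = "runHelloService", lockAtLeastFor = "PT5M", lockAtMostFor = "PT30M")
    public void runHelloService() {
        // business logic
    }
}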
Is There a Better Way to Schedule Tasks in Kubernetes?
Yes, with a Kubernetes CronJob. We can overcome these disadvantages by separating the concerns of running the scheduled task and serving the application. This requires us to expose the service logic as an API endpoint by writing a controller that calls the service logic, like this:
@RestController
public class MyController {

    private final HelloService helloService;

    @Autowired
    public MyController(HelloService helloService) {
        this.helloService = helloService;
    }

    @PostMapping("/hello")
    public ResponseEntity<String> sayHello() {
        String hello = this.helloService.sayHello();
        return ResponseEntity.ok(hello);
    }
}
Next, we create a CronJob resource that will call this new endpoint on a set schedule:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "0 8 * * MON-FRI"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - curl -X POST http://path.to.the.java.api/hello
          restartPolicy: OnFailure
Now we have a horizontally scalable solution.
However, what if we have a regulation that prevents us from exposing HelloService as an API endpoint? Or what if the security team says we need to retrieve a JSON Web Token (JWT) and put it in the curl request's Authorization header before calling the API endpoint? At best, this would require more time and shell expertise than the team might have and, at worst, it could make the above solution infeasible.
Is There an Even Better Way to Schedule Tasks in Kubernetes?
Yes. We can alleviate these concerns by using Java's multiple entry points feature.
However, the unique challenge in our case is that the service logic lives in a Spring Boot API, so certain Spring dependency injection logic needs to execute so that the service layer and all of its dependencies are instantiated before an alternative entry point is executed.
How can we give Spring Boot the time it needs to configure the application before we run the alternative entry point? I found that the code below accomplishes this:
@SpringBootApplication
public class SpringBootEntryPoint {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext =
                SpringApplication.run(SpringBootEntryPoint.class, args);

        /*
         * If an alternative entry point environment variable exists, then determine if there is business logic
         * that is mapped to that property. If so, run the logic and exit. If an alternative entry point property
         * does not exist, then allow the application to run as normal.
         */
        Optional.ofNullable(System.getenv("alternativeEntryPoint"))
                .ifPresent(
                        arg -> {
                            int exitCode = 0;

                            try (applicationContext) {
                                if (arg.equals("sayHello")) {
                                    String hello = applicationContext.getBean(HelloService.class).sayHello();
                                    System.out.println(hello);
                                } else {
                                    throw new IllegalArgumentException(
                                            String.format("Did not recognize alternativeEntryPoint, %s", arg));
                                }
                            } catch (Exception e) {
                                exitCode = 1;
                                e.printStackTrace();
                            } finally {
                                System.out.println("Closing application context");
                            }

                            /*
                             * If there is an alternative entry point listed, then we always want to exit the JVM so
                             * the Spring app doesn't throw an exception after we close the applicationContext. Both
                             * the applicationContext and the JVM should be closed/exited to prevent exceptions.
                             */
                            System.out.println("Exiting JVM");
                            System.exit(exitCode);
                        });
    }
}
This pattern also works with other Java frameworks such as Micronaut and Guice with Java Spark, so it's relatively framework agnostic. Below is the same pattern using Micronaut:
public class MicronautEntryPoint {

    public static void main(String[] args) {
        ApplicationContext applicationContext = Micronaut.run(MicronautEntryPoint.class, args);

        /*
         * If an alternative entry point environment variable exists, then determine if there is business logic
         * that is mapped to that property. If so, run the logic and exit. If an alternative entry point property
         * does not exist, then allow the application to run as normal.
         */
        Optional.ofNullable(System.getenv("alternativeEntryPoint"))
                .ifPresent(
                        arg -> {
                            int exitCode = 0;

                            try (applicationContext) {
                                if (arg.equals("sayHello")) {
                                    String hello = applicationContext.getBean(HelloService.class).sayHello();
                                    System.out.println(hello);
                                } else {
                                    throw new IllegalArgumentException(
                                            String.format("Did not recognize alternativeEntryPoint, %s", arg));
                                }
                            } catch (Exception e) {
                                exitCode = 1;
                                e.printStackTrace();
                            } finally {
                                System.out.println("Closing application context");
                            }

                            /*
                             * If there is an alternative entry point listed, then we always want to exit the JVM so
                             * the app doesn't throw an exception after we close the applicationContext. Both the
                             * applicationContext and the JVM should be closed/exited to prevent exceptions.
                             */
                            System.out.println("Exiting JVM");
                            System.exit(exitCode);
                        });
    }
}
The only major difference is that the class doesn't need an annotation, and the Micronaut equivalents of the Spring methods are used (for example, Micronaut#run).
Here is the same pattern using Guice and Java Spark:
public class GuiceEntryPoint {

    private static Injector injector;

    public static void main(String[] args) {
        GuiceEntryPoint.injector = Guice.createInjector(new GuiceModule());

        /*
         * If an alternative entry point environment variable exists, then determine if there is business logic
         * that is mapped to that property. If so, run the logic and exit. If an alternative entry point property
         * does not exist, then allow the application to run as normal.
         */
        Optional.ofNullable(System.getenv("alternativeEntryPoint"))
                .ifPresent(
                        arg -> {
                            int exitCode = 0;

                            try {
                                if (arg.equals("sayHello")) {
                                    String hello = injector.getInstance(HelloService.class).sayHello();
                                    System.out.println(hello);
                                } else {
                                    throw new IllegalArgumentException(
                                            String.format("Did not recognize alternativeEntryPoint, %s", arg));
                                }
                            } catch (Exception e) {
                                exitCode = 1;
                                e.printStackTrace();
                            } finally {
                                System.out.println("Closing application context");
                            }

                            /*
                             * If there is an alternative entry point listed, then we always want to exit the JVM so
                             * the application does not continue on to start the Spark API below.
                             */
                            System.out.println("Exiting JVM");
                            System.exit(exitCode);
                        });

        /* Run the Java Spark RESTful API. */
        injector.getInstance(GuiceEntryPoint.class).run(8080);
    }

    void run(final int port) {
        final GoodByeService goodByeService = GuiceEntryPoint.injector.getInstance(GoodByeService.class);

        port(port);

        get("/hello", (req, res) -> {
            return goodByeService.sayHello();
        });
    }
}
The main differences are that you retrieve the beans from the Guice Injector rather than from an ApplicationContext object as in Spring and Micronaut, and that there is a run method that contains all of the controller endpoints rather than a controller class.
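The GuiceModule passed to Guice.createInjector above isn't shown in this article; a minimal version might look something like the following sketch. This is an assumption for illustration only, as the repo's actual module may differ.

// Hypothetical module for illustration: it only needs to make the services injectable.
public class GuiceModule extends AbstractModule {

    @Override
    protected void configure() {
        bind(HelloService.class).in(Singleton.class);
        bind(GoodByeService.class).in(Singleton.class);
    }
}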
You can see these code samples and run them by following the directions in this repo's README.
In each of these examples, you'll notice that I control whether the alternative entry point's logic is invoked by checking whether an environment variable exists and, if it does exist, what its value is. If the environment variable doesn't exist or its value isn't what we expect, then the HelloService bean will not be retrieved from the ApplicationContext or the Injector (depending on the framework being used) and will not be executed. While this isn't exactly an alternative entry point, it functions in a similar way. Instead of using multiple main methods like traditional alternative entry points, this pattern uses a single main method and uses environment variables to control which logic is executed.
Note that when using Spring and Micronaut, the applicationContext is closed using try-with-resources, regardless of whether the service method call executes successfully or throws an Exception. This ensures that if an alternative entry point is specified, it will always result in the application exiting. This prevents the Spring Boot application from continuing to run and serving HTTP requests through the controller API endpoints.
Last, we always exit the JVM if an alternative entry point environment variable is detected. This prevents Spring Boot from throwing an Exception because the ApplicationContext is closed but the JVM is still running.
Effectively, this solution allows dependency injection to take place before the entry point routing logic occurs.
This solution allows us to write a Kubernetes CronJob resource that uses the same Docker image we would use to run the Spring Boot application as an API; we simply add an environment variable in the spec, as seen below.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-service
spec:
  schedule: "0 8 * * MON-FRI"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello-service
              image: helloImage:1.0.0 # This is the Java API image with the second entry point.
              imagePullPolicy: IfNotPresent
              env:
                - name: alternativeEntryPoint
                  value: "sayHello" # Must match the value the Java entry point checks for.
          restartPolicy: OnFailure
By using a Kubernetes CronJob, we can guarantee that only one scheduled task is running at any given time (provided that the task is scheduled with sufficient time between invocations; the CronJob's concurrencyPolicy field can also be set to Forbid to prevent overlapping runs). In addition, we didn't expose HelloService through an API endpoint or need to use shell scripting; everything was done in Java. We also eliminated the duplicated scheduled tasks instead of managing them.
I like to visualize this pattern as making a jar act like a Swiss Army knife: each entry point is like a tool in the Swiss Army knife that runs the jar's logic in a different way. Just as a Swiss Army knife has different tools, like a screwdriver, knife, scissors and so on, this pattern lets a jar act on its embedded business logic as a RESTful API, a scheduled task and so on.
FAQs
Question:
Wouldn't it be easier to write a @Scheduled method and disable it based on some configuration property?
Answer:
First, it's worth considering that other frameworks like Micronaut don't have the ability to disable a @Scheduled method. Moreover, Java Spark can't schedule tasks at all. The pattern described in this article (I'll call it the Swiss Army knife pattern), on the other hand, works across more frameworks than just Spring.
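For reference, the approach the question describes usually looks something like the sketch below in Spring, where the cron expression is resolved from configuration and the special value "-" (Scheduled.CRON_DISABLED) turns scheduling off; the property name here is just an illustrative placeholder.

@Component
public class ConditionallyScheduledTasks {

    // If tasks.hello.cron resolves to "-", Spring skips scheduling this method entirely;
    // the default after the colon below disables it unless the property is set.
    @Scheduled(cron = "${tasks.hello.cron:-}")
    public void runHelloService() {
        // business logic
    }
}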
But even if your project does use Spring, one of the main disadvantages I see in using @Scheduled in general is that it requires the Spring app to run 24/7 in order for the Spring task scheduler to run and invoke the @Scheduled method based on the cron schedule. This would require a Kubernetes pod that is running 24/7 with the Spring app inside it. I see this use of resources (and probably money) as unnecessary because Kubernetes provides its own task scheduler that we can take advantage of by creating a CronJob resource. Kubernetes resources will only be used for the lifetime of the CronJob, rather than having a pod running constantly with the @Scheduled task inside it.
In other words, I liken the @Scheduled and CronJob options to this: we wouldn't spin up an EC2 instance and create a cron job on the EC2 instance that invokes a Lambda function, because we can invoke a Lambda function with a CloudWatch cron rule. One of the reasons we don't do this is that the EC2 instance would be more expensive compared to the free CloudWatch rule. Like the EC2 instance in this example, I see a @Scheduled pod as an unnecessary provisioning of resources because we already have a scheduling tool available in Kubernetes' CronJob (which is like CloudWatch cron rules).
Question:
Does this pattern work in a multicluster environment?
Answer:
This pattern has not been tested in a multicluster environment, and it likely wouldn't work, because it doesn't include a way for a scheduled task running in Cluster A to be aware of another instance of the scheduled task running in Cluster B. Quartz and ShedLock use an external, centralized database to orchestrate multicluster scheduled tasks; this pattern doesn't include an external database.