IntelliJ IDEA: process is still running

Problem

So you can’t start IntelliJ IDEA because it claims the IDE is already running, even though it isn’t?


This happens to me quite often with IntelliJ IDEA Community Edition 2023.2.2, of course in a Linux environment.

Solution

My solution for this problem is a little bash script:


#!/bin/bash
# Find and remove IntelliJ IDEA's .lock file without hardcoding the config directory
find ~/.config/JetBrains/ -name .lock -delete

Save the above lines in a file named, for example, “rm_idea_lock_file.sh” in your home directory, make it executable, and just run this script each time IDEA doesn’t want to start.

In short, the script finds and removes the lock file used by IntelliJ IDEA without hardcoding the full path to the directory where it is located, so the script is independent of the IntelliJ version.

Voila! 🙂

Designing a resilience-friendly REST API

For more than the last 4 years I have been working on a backend system consisting of about 30 microservices (Java, Spring Boot, Spring Cloud) and an API gateway. This system was used by around 1 million active customers. Making this system resilient was one of the key aspects. Below I collect my thoughts on the relation between resiliency and REST API design.

How can we make it resilient?

Basic tools for making a system like the one described above resilient are:

  1. Load Balancing – from the resiliency point of view this is about redundancy and the ability to route an HTTP request to a service instance that is available/healthy. Please note that this will not save a request that was directed to an instance that suddenly became unhealthy (so the load balancer hasn’t noticed yet). Load balancing is relatively easy to get, as modern deployments are often based on Kubernetes or similar platforms which offer load balancing out of the box.
  2. Retrying – this is a strategy to save a request after a failure. There are a couple of reasons why this may help:

    👉 assuming the failure was caused by a transient, time-limited factor, the retry may happen just after recovery, so the request will be handled successfully;

    👉 assuming load balancing is in place, the retry may send the request to another instance of the service, which may be healthy.

There are other tools as well, but they are out of the scope of this article.

Retrying is not that easy

You cannot just retry every HTTP request after any failure. This is because there are situations where the system state was already changed before the error happened. Just consider a request to send a bank transfer. If the failure was caused by an I/O error while receiving the HTTP response, then retrying may (depending on the system design) cause a second bank transfer to be created.

Thus it’s obvious we can always retry a request that doesn’t change the system state. Usually this means HTTP GET requests are safe to retry. But not always. Example: an HTTP GET request with some kind of one-time challenge-response authentication (see cryptographic nonce). In such cases the one-time challenge can be verified positively only once. But this is because the authentication verification is… affecting the system state.

Another case to consider: a connect-timeout is a kind of failure after which you can retry any request. Why? Because the request didn’t reach the service and the system state was not affected. But a response-timeout (read-timeout) is another story! In that case you can only retry a request that doesn’t change the system state. And please note that a response-timeout may appear as a plain error (an exception in Java) or… as a proper HTTP response: HTTP 504.

In general it is not easy to define proper rules for recognizing when we can retry an HTTP request.

What about retry/resilience libraries?

Yes, there are libraries available for Java that support retrying, for example Spring Retry and Resilience4j.

Yes, they are very handy. But they will not give you a general configuration that is perfect for your case. You always have to provide a configuration describing when retrying is allowed, and just using the defaults may not be the best idea. Below are the defaults for the retry filter in Spring Cloud Gateway MVC, and it’s not clear from the documentation how it works (my guess: retrying only for HTTP GET when the response is HTTP 5xx or one of the exceptions occurred: IOException, TimeoutException):

series: 5XX series
methods: GET method
exceptions: IOException, TimeoutException and RetryException

What about idempotent requests?

According to MDN Web Docs (Mozilla Developer Network), “an HTTP method is idempotent if the intended effect on the server of making a single request is the same as the effect of making several identical requests.” Requests that don’t change the system state are idempotent because their effect is “no change”, and making “no change” many times is the same as making “no change” once.

So it looks like configuring retrying for all idempotent requests is the way to go. Right? Not exactly. REST APIs are designed for clients and the implementation of resilience must take that into consideration as well. Example: HTTP DELETE requests should be idempotent according to MDN Web Docs. However, such a request can return an HTTP 404 response when the data record to be deleted doesn’t exist. So in a scenario where the first HTTP DELETE request got a timeout but removed the record, and the request got retried, the client receives HTTP 404 in a situation where everything ended as desired. The only problem is the response: HTTP 404 instead of HTTP 200.

Thus please remember what MDN Web Docs say: “To be idempotent, only the state of the server is considered.” But in the case of REST APIs, the HTTP response received by an API client is relevant as well.

Another thing is that some sources claim that “if we want to apply retries, the operation must be idempotent”. This is not true. I will repeat: a non-idempotent operation that failed in such a way that no change was made in the system can be retried (like the connect-timeout that I mentioned earlier). You just need to… recognize that kind of failure.

Perfect retrying goals

Summing up, in my opinion adding retrying to a REST API implementation must preserve:

  1. the system state as desired by the API calls made by clients – so idempotent requests can be retried, and non-idempotent requests can be retried only when a failure prevented them from reaching the service. In other words, retrying cannot cause changes not desired by a client.
  2. transparency for clients – so a client does not need to consider whether a received HTTP response is a result of retrying the request on the backend side. In other words, retrying cannot make the client more complex.

Solution

In this part I will focus only on aspects related to retrying (load balancing is good – so use it, if it makes sense in your system). From the above considerations it follows that to have perfect retrying one needs to:

  1. recognize if an HTTP request is idempotent or not
  2. recognize if a failure happened before there was a chance for a request to change system state or not – this is relevant only for non-idempotent HTTP requests
  3. implement REST API operations in a way that supports transparent retrying from client’s perspective

Ad.1. Recognizing if an HTTP request is idempotent

There is no way to recognize whether an incoming HTTP request will be handled in an idempotent way without knowing how its handling is implemented. You just cannot recognize it without deep knowledge in advance! The only way to go is to stick strictly to a well-defined convention.

I strongly suggest sticking to the convention given by MDN Web Docs, which says that requests using the following HTTP methods should be idempotent: GET, PUT, DELETE (as well as HEAD and OPTIONS – which are not that popular with REST APIs). But any convention is good if it is followed without exceptions.

Ad.2. Recognizing if a failure happened before a chance to change system state

This is simple: if a failure happened and the system state was not changed, then we can retry, no matter whether the operation is idempotent or not. How can one be sure the system state was not changed? There is only one such case: the request didn’t reach the target system. Thus such cases are signaled only by certain exceptions denoting no possibility of making a TCP connection (a code sketch follows the list below). These are:

  1. java.net.NoRouteToHostException
  2. java.net.ConnectException
  3. java.net.UnknownHostException
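
Putting points 1 and 2 together, below is a minimal sketch of such a retry policy, assuming Resilience4j as the retry library (the helper method, the set of idempotent methods and the attempt/wait values are my own illustration, not anything prescribed by the library or by the article):

import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;

import java.net.ConnectException;
import java.net.NoRouteToHostException;
import java.net.UnknownHostException;
import java.time.Duration;
import java.util.Set;

public class RetryPolicies {

    private static final Set<String> IDEMPOTENT_METHODS = Set.of("GET", "PUT", "DELETE", "HEAD", "OPTIONS");

    // These exceptions mean the request never reached the target system,
    // so retrying is safe even for non-idempotent requests.
    private static boolean requestNeverReachedServer(Throwable t) {
        return t instanceof ConnectException
                || t instanceof NoRouteToHostException
                || t instanceof UnknownHostException;
    }

    public static Retry retryFor(String httpMethod) {
        boolean idempotent = IDEMPOTENT_METHODS.contains(httpMethod);
        RetryConfig config = RetryConfig.custom()
                .maxAttempts(3)
                .waitDuration(Duration.ofMillis(200))
                // idempotent requests: retry on any failure;
                // non-idempotent requests: retry only when the server was never reached
                .retryOnException(t -> idempotent || requestNeverReachedServer(t))
                .build();
        return Retry.of(httpMethod + "-retry", config);
    }
}

Such a Retry instance can then decorate the actual HTTP call, for example with Retry.decorateSupplier(..); the important part is that the decision is driven by the HTTP method plus the kind of failure.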

Ad.3. Implement REST API operations in a way that supports transparent retrying

When implementing REST API operations like PUT or DELETE, choose to return an HTTP response, especially an HTTP status, that makes retrying transparent. For example, a DELETE operation should return the same positive response whether the record was present and got deleted or the record was not present, as in the sketch below.
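
Here is a minimal Spring sketch of such a DELETE operation (the OrderRepository interface and its deleteById method are hypothetical and assumed to be a no-op when the record doesn’t exist):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

interface OrderRepository {
    void deleteById(String id); // assumed to do nothing when the record does not exist
}

@RestController
@RequestMapping("/orders")
class OrderController {

    private final OrderRepository repository; // hypothetical repository

    OrderController(OrderRepository repository) {
        this.repository = repository;
    }

    @DeleteMapping("/{id}")
    ResponseEntity<Void> delete(@PathVariable String id) {
        // Deleting a non-existing record is a no-op here,
        // so a retried DELETE gets exactly the same response.
        repository.deleteById(id);
        return ResponseEntity.noContent().build(); // HTTP 204, whether the record existed or not
    }
}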

That’s all. I hope this article will help you make better REST APIs.


Issues with Spring Cloud

This post is the result of my 4 years (so far!) of professional experience with Spring Cloud. When I started I was full of positive attitude, because I’m really delighted by Spring Framework and Spring Boot. As time passed I observed more and more disadvantages of Spring Cloud. I have been meaning to write about this for some time now. At some point I even wanted to give it the “Spring Cloud sucks!” title, but later I decided these words are too strong and not fair with respect to the creators of Spring Framework and Spring Boot – both being fantastic products! So let’s review Spring Cloud issues.

To make it clear, Spring Cloud is an umbrella for a set of libraries and my opinions are based on using the following of them:

  1. Spring Cloud Gateway and Spring Cloud Gateway MVC
  2. Spring Cloud Sleuth / Micrometer Tracing
  3. Spring Cloud Load Balancer / Netflix Ribbon
  4. Spring Cloud Contract WireMock
  5. Spring Cloud Stream, mainly for integration with RabbitMQ
  6. Spring Cloud Circuit Breaker using Netflix Hystrix / Resilience4j
  7. Spring Cloud OpenFeign

The worst issue: poor backward compatibility

During the abovementioned 4 years I’ve faced upgrading Spring Boot multiple times. First from Spring Boot 2.1 to Spring Boot 2.3, then to Spring Boot 2.6, then 2.7, and finally to Spring Boot 3.2. Each of these versions is related to a different version of Spring Framework. And most of them are related to different “release trains” of Spring Cloud.

We, me and the whole team, were struggling with these upgrades. This was a strange experience. At that time I already had 10 years of experience with Spring Framework and some years with Spring Boot, and I remembered that upgrading to newer versions of these libraries alone was never that bad. Then I realized that only the parts of our microservices that depended directly on Spring Cloud were hard to upgrade.

The main causes of this poor backward compatibility of the Spring Cloud libraries are the radical changes that happened in recent years:

  1. in the client-side load balancing area: Netflix Ribbon was replaced by Spring Cloud Load Balancer – the migration was not easy
  2. in the resilience area: Netflix Hystrix was replaced by Resilience4j
  3. Spring Cloud Sleuth was replaced by Micrometer Tracing – see this migration guide

But these are not the only reasons. Spring Cloud Stream v4 introduced a dramatic change of approach. The so-called “change from imperative style to reactive functions” caused certain abstractions (Java interfaces) and annotations to disappear, while some naming conventions were changed. Just look at this migration guide!

Internal incompatibilities

Some of the libraries under the Spring Cloud umbrella are not fully compatible or rather do not cooperate well. And I’m talking here about runtime problems. These are rare cases, but among them I found a very important one: the incompatibility between Spring Cloud Sleuth and Spring Cloud Gateway. Actually this is about Spring Cloud Sleuth not cooperating well with Project Reactor, and the bug that I faced was mixed-up tracing data, as in this issue on GitHub: https://github.com/spring-cloud/spring-cloud-sleuth/issues/2184

In one of the GitHub issues one of the Spring Cloud developers wrote a comment stating that one should avoid using Spring Cloud Sleuth with Project Reactor. TODO – give reference

Another issue I’ve faced was again related to Project Reactor (Spring Cloud Gateway) and Spring Cloud Sleuth. One of the microservices had specific code to generate HTTP traffic logs. This code started to generate some really unreadable exceptions at runtime (you know – stack traces within Project Reactor – just useless). We were not able to migrate this project from Spring Boot 2.1 to Spring Boot 2.3, or rather from the Spring Cloud matching Spring Boot 2.1 to the Spring Cloud matching Spring Boot 2.3. I had to jump directly to Spring Boot 2.6 and its related Spring Cloud version to make it work. A lot of time was spent trying to debug this…

Another compatibility issue within Spring Cloud is Spring Cloud Contract WireMock not being the best way to use WireMock with Spring Boot integration tests using the @SpringBootTest annotation. The problem is that WireMock configured to use a random port is somehow restarted, even when one Spring context is shared among all tests. The restart changes the port number used by WireMock and some tests fail… but only when all of them are run together. What is more interesting, the WireMock documentation is also not the best source of knowledge on how to use it with Spring.

BTW: the best way to use WireMock with tests using @SpringBootTest is the 2nd variant described on page https://rieckpil.de/spring-boot-integration-tests-with-wiremock-and-junit-5/

There is another issue: Spring Cloud Gateway MVC, when used with a circuit breaker, is by default configured to use… the Resilience4j Bulkhead, which… breaks Micrometer Tracing in a subtle way. It’s just that the trace-id is changed during handling of a request! It is revealed only if you have HTTP request/response logging showing the trace-id (but this is common in some domains). You need to disable the Bulkhead to get tracing working properly: spring.cloud.circuitbreaker.bulkhead.resilience4j.enabled=false

Radical turnarounds (lack of stability)

Of course I’ve covered this issue already in the section about poor backward compatibility. But I just wanted to emphasize the problem of the lack of stability. At one point in Spring Cloud development the whole set of components coming from the Netflix open source stack was removed and replaced by something else. And actually these replacements don’t look better. Just different. During migration the real problem was caused by the minor differences: the replacements didn’t cover the full feature set of the components they replaced. So a configuration for Netflix Ribbon could not be completely transformed into a configuration for Spring Cloud Load Balancer.

Then Spring Cloud Sleuth was replaced by Micrometer Tracing. Looks like a refactoring. OK… But what next? As of writing this blog post I’ve taken a look at https://spring.io/projects/spring-cloud-openfeign and learned what I was suspecting: OpenFeign is going to be replaced by Spring RestClient.

Lack of stability is like… lack of trust. A decision to start using a given library/framework is a kind of investment. A commitment. A bad decision will cost more. As of now I’m not so sure one should use Spring Cloud without a good reason.

Strange abstractions

In some cases Spring Cloud is promoting strange abstractions. And some people believe it is the proper way of doing things. A good example: Spring Cloud Stream. This library offers really strange abstractions to integrate with a messaging technology like RabbitMQ or Kafka: binding (really? another “binding”…), binder, channel, listener… but messaging already has its own abstractions: message, queue/topic, etc. These new abstractions are only confusing. Just look at this: https://www.baeldung.com/spring-cloud-stream

It is so much simpler and more intuitive to integrate with RabbitMQ using… Spring AMQP! It is just based on intuitive abstractions for messaging (see the sketch below). And this library is stable. On the other hand Spring Cloud Stream has just been changed and the tutorial linked above is actually outdated…
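
A minimal Spring AMQP sketch of what I mean – the queue name and the message payload are made up for illustration:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
class OrderMessaging {

    private final RabbitTemplate rabbitTemplate;

    OrderMessaging(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Sending: plain "send a message to a queue" semantics.
    void sendOrderCreated(String orderId) {
        rabbitTemplate.convertAndSend("orders.created", orderId);
    }

    // Receiving: plain "listen on a queue" semantics.
    @RabbitListener(queues = "orders.created")
    void onOrderCreated(String orderId) {
        System.out.println("Order created: " + orderId);
    }
}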

Another example of strange abstractions is the notion of the Spring Cloud Gateway Filter Factory. Defining a new one is ridiculous. So you always need to create a factory, including an empty “config” class, even if you don’t need one? And please take a careful look at how defining Spring Cloud Gateway routes in code looks… https://www.baeldung.com/spring-cloud-custom-gateway-filters

Spring Cloud… version?

The notion of a Spring Cloud version is… not obvious. First, there are the so-called “release trains”. Actually nothing fancy. I don’t know why a major version number needs to be called a “release train”. In the past “release trains” were named with words like “Hoxton” or “Greenwich”. Now they use numbers like years: 2022 or 2023. But the word-names are still somehow used… just look at the page https://spring.io/projects/spring-cloud

As you can see, the page contains a… matrix. This matrix tells you which Spring Cloud “release train” you need to choose for a given version of Spring Boot. Why can’t this be done somehow… better, automatically?

And you still have a Spring Cloud version for a given “release train”. Like “2023.0.1” being the latest version of the 2023 “release train” as of April 2024. Simple, right? But remember that individual Spring Cloud libraries have their own versions. 🙂

Growing too fast?

I’m really happy that Spring Cloud Gateway MVC was published, so I could switch from the Spring Cloud Gateway that is based on Spring WebFlux, but… this library is so unfinished. It is still missing a couple of relevant features like a valid retry functionality, metadata support (relevant for setting client timeouts on a per-route basis) and support for modifying the HTTP response body. All these links point to GitHub issues which are all open as of April 2024. It looks like Spring Cloud Gateway MVC is not yet ready for slightly more advanced use cases.

Anyway, Spring Cloud Gateway MVC is still better than Spring Cloud Gateway, for which you cannot get reliable tracing… and from which you get shitty stack traces (Project Reactor…). It’s just so confusing that two libraries with such similar names (“Spring Cloud Gateway” vs “Spring Cloud Gateway MVC”) are so… mismatched in supported functionality.

Summary

Spring Cloud is a big umbrella of libraries. Maybe too big. In my opinion you should think twice before binding a new project to Spring Cloud. Have a good reason to do so. Sooner or later you are going to upgrade the Spring Boot version (Spring Boot is the more basic framework here) and this will lead to changing Spring Cloud “release trains”, which can be costly. Spring Cloud is definitely not as good as Spring Framework or Spring Boot.


REVISITED: The Lego Jurassic World video game on Linux

In December 2022 I successfully installed the “Lego Jurassic World” video game on Linux. The caveat: this game was made for MS Windows. It was a really painful process which I documented on my blog. Unfortunately, around one year later things got broken… Most likely because of some change in the Steam runtime environment, which is constantly updating itself from the server. In any case, I could no longer run the game. Moreover, I could not reinstall the game using the previous method, because Steam itself could not start due to some errors. So below I present a new, much simpler procedure for installing this game on Linux in 2024!

DISCLAIMER: this game is tied to the Steam platform and I assume you already have a Steam account and that the “Lego Jurassic World” game is present in your Steam library. I will not cover how to make the game present in your Steam library if you have the game on CD. Most likely the simplest way is to buy the game on Steam.

Installing Steam

To run the game you need the Steam client installed. I tried different options and the only one that worked for me on a fresh installation of Linux Mint 21.3 was to:

  1. Download a DEB package of the Steam client from Steam webpage: https://store.steampowered.com/about/
  2. Install this package by right-clicking the DEB file and selecting the first menu item, which was something like “Install with GDebi…”

Run Steam, click on the “Steam” menu and then select “Settings”. This will show the “STEAM SETTINGS” window. Select the “Compatibility” item, then turn on the following options:

  • Enable Steam Play for supported titles
  • Enable Steam Play for all other titles

Then set the option “Run other titles with” to the “Proton Experimental” value.

Installing the game

Install the game on Steam as usual: click on the game in the Steam library and then click the big blue “INSTALL” button.

After the game is installed you can check whether it runs out of the box. If not (as it was in my case), here are a couple of hints:

  1. Switch to another graphics card: I’ve installed the game on Linux running on a laptop with two graphics cards. One is the Intel HD 4000 integrated with the CPU, which is an Intel Core i3-3120M. The other video card is the NVIDIA GeForce 610M. I’ve installed the proprietary NVIDIA driver for this card and this gave me the so-called “NVIDIA Optimus” applet/icon in the tray. By clicking on this icon I can switch between both graphics cards. It turned out the game runs only when the Intel graphics card is selected…
  2. Launch options: in Steam right-click on the game item on the left-side panel. Select “Properties…” from the menu that appeared.
    In the General tab you should see the “LAUNCH OPTIONS” section. There is a text area where you can enter the text below, which I used:

    PROTON_USE_WINED3D=1 PROTON_NO_FSYNC=1 PROTON_LOG=1 %command%
  3. Game compatibility options: again, in Steam right-click on the game item in the left-side panel. Select “Properties…” from the menu that appears. Go to the “Compatibility” tab. Select the “Force the use of a specific Steam Play compatibility tool” checkbox and select “Proton 5.0-10” below.
  4. Try running the game again: this happens almost always in my case. After I start the game by clicking the big green “PLAY” button in Steam, the button turns blue and its label changes to “CANCEL” or “STOP”, and it seems like the game is starting. It takes some time. But then the button turns green again with the “PLAY” label. Just try again. I consistently succeed when retrying to run the game… which is hard to explain.

Once the game starts, I have had no issues so far. 🙂


Linux security analysis hints

Here are my notes taken during a training on Linux security analysis that I attended recently. These notes are specific to Ubuntu-based Linux distros, especially Linux Mint.


Logs

How are logs created?

  1. Directly – a process writes to a file in the /var/log directory. This is how the Apache server or Nginx works.
  2. Via the kernel – a process passes log messages to the Linux kernel, then some daemon receives them from the kernel and stores them somehow, according to its configuration.
    1. In the past this mechanism was called “syslog”. The chain was: process => kernel => syslog => file…
    2. Nowadays this is called “journald”. The chain is: process => kernel => journald => some daemon (if any, it can be syslog) => …
      1. Journald writes log data to its own binary files. Storage is defined in the file /etc/systemd/journald.conf (property “Storage”). These binary files are stored in subdirectories of the /var/log/journal directory.
      2. Then journald passes log data to some defined daemon (if any), which does whatever its configuration says.

Viewing logs

To view logs stored by journald use the journalctl command. Interacting with this program is similar to using less. Among others:

  • Press “/” or “?” to search. “/” is for searching forward and “?” is for searching backward. Enter a text to search and press Enter.
  • Arrows, Page Up, Page Down buttons work as expected. The End button moves you to the very latest log entry.
  • Press “q” to exit.

Please note that running this command as a plain user may give limited results. All logs are definitely available to the root user.

The journalctl command can be used with grep. Example: journalctl | grep sshd
The journalctl command can also accept command line parameters. Examples:

  • selecting logs belonging to a single service: journalctl -u ssh
    • A service name is the same as used with systemctl command. This is “systemd unit” name.
  • selecting logs since some time, for example for last hour: journalctl --since -1h

Viewing ssh login attempts

To see successful ssh login attempts:

journalctl -u ssh | grep Accepted

To see failed ssh login attempts:

journalctl -u ssh | grep "Invalid\|Disconnected"

Some interesting log files

  • /var/log/auth.log – authentication
  • /var/log/kern.log – kernel (diagnostics)

On Ubuntu/Linux Mint the rsyslog daemon is used. Its configuration is in the file /etc/rsyslog.conf and in the configuration files in the /etc/rsyslog.d directory. On Linux Mint 20.2 the file /etc/rsyslog.conf contains a directive to include all *.conf files from the mentioned directory. This configuration defines (among other things) which extra files collect given categories of log messages.

Extra logs of login events

Login events are recorded in additional binary log files: /var/log/wtmp and /var/log/btmp. Use the following commands to view them:

  • last – shows the list of successful login events and reboot events.
  • lastb – shows the list of unsuccessful login attempts

Auditd

The audit daemon. By default it is present on RedHat-based Linuxes.
If present in the system, its log file is /var/log/audit/audit.log

ausearch – a program to view auditd logs

SELinux

NOTE: the phrase avc: denied denotes a violation of some SELinux policy.

AppArmour

This is installed by default on Debian/Ubuntu-based Linuxes. It logs into dmesg and the kernel logs.

NOTE: the phrase apparmor="DENIED" denotes an AppArmor rule violation.

Other logs

Some applications write logs on their own and these log files don’t have to be located in the /var/log directory.
One option to find them: find / -iname '*log' 2>/dev/null
Some log files don’t contain the word “log” in their file name, so it is worth checking for “err” and “out” as well. Example: catalina.out.

User history

In general it is worth reviewing the dot-files in a user’s home directory if there is a suspicion this user did something (or rather that someone did something under this user’s account): ls -la

User history is written to files in the user’s home directory, so it can be tampered with by the user. If unaffected, it can be helpful for analysing what was done. Example: .bash_history

NOTE: for user accounts created for services like PostgreSQL, MySQL, Nginx, Apache, etc. the home directories are not within the /home directory. The system file /etc/passwd contains the location of home directories for local accounts.

Vim

The file .viminfo shows which files were opened. It may contain the history of commands (like replacing) the user executed in Vim.

Less

The file .lesshst shows what terms a user searched for.

Others

The directories .cache and .config under a user’s home directory may contain traces of running some programs. Even the time of the last modification of some file can be helpful.

Access

Accounts with passwords

Look into /etc/shadow. The second column contains ‘*’ or ‘!’ if a password is not set. Service accounts should not have a password.

Who is “root”?

Root privileges are verified based on the “uid”, not on the user name. Uid=0 denotes a root user account. Any user account may have uid=0 set.

SSH authorized keys

NOTE: there are 2 relevant files in a user’s home directory: .ssh/authorized_keys and .ssh/authorized_keys2!

Sudo

The file /etc/sudoers contains the configuration which tells which user can use the sudo command and how it can be used. Some of this configuration can exist as files under the /etc/sudoers.d directory.

NOTE: sudo-access to any reading program is dangerous, as it allows reviewing files like /etc/shadow. Moreover, the less command allows editing files, and some editors, like nano, allow executing any command!

File access mode

Don’t use file access mode “777” to solve problems.

Executables belonging to root should not have mode “u+s” or “g+s”. This is the so-called “setuid/setgid”. Such executables are executed with the file owner’s (or group’s) privileges, no matter who runs them.

NOTE: many executables correctly have this mode set. Examples: passwd, sudo.

You can find such files using command:

find / -perm /u+s,g+s

ACL

Access control list – an extension of the standard Unix file access model. ACLs allow assigning more groups to a file or adding access for individual users. When a security breach is suspected, one should check for which files in the system ACLs are configured. When files are listed with the ls -l command, files with an ACL have a “+” sign at the end. Example: -rw-rw-r--+

getfacl FILE – shows the ACL summary for a given file.

System state

Sessions

Commands to check who is logged in to the system:

  • w
  • who – very similar to the above, with less details
  • loginctl – more powerful than the above. Without params it just lists sessions (like w or who). With the params user-status UID it shows processes run by the given user.

Processes

Commands to view processes of all users:

  • ps aux
  • ps -ef – shows PID and parent process PID, which is not shown by the above.
  • ps axjf – shows simple process trees
  • pstree – shows process trees, but without process details

NOTE: some of these commands are redundant, but this may be relevant on a system that was compromised, as some tools may have been replaced.

pgrep

This command lists all processes of a given user with names/descriptions: pgrep -u UID -a

pkill

The pkill command allows killing all processes of a given user.

/proc filesystem

Subdirectories and files in the /proc directory represent the system state. The Linux kernel presents here information about running processes and about system parameters/settings. These are not real files.

Among others, subdirectories whose names are numbers represent processes – these numbers are PIDs. Inside each such directory /proc/NNN there are files representing certain information about the process:

  • exe – link to an executable file this process is executing
  • cwd – link to a current directory of this process
  • cmdline – should hold a complete command line, but processes have the freedom to override the memory region and break assumptions about the contents or format of this file!
  • fd – a directory with links representing open file descriptors. Links to files named like /dev/pts/2 just reference the console/terminal.
    NOTE: vim doesn’t hold a file descriptor open all the time if it doesn’t have to. So a file “opened” for reading in vim may not be listed here.
  • and many others…

NOTE: every process in Linux can set its own name/description and it is a fully legal operation. An attacker can use it to “hide” in a process list, for example by naming a process like [kworker]…. So watch process owners carefully.

List of all open files

lsof – this command shows all open files, along with which process and which user opened them. You may need to install it from a repository. Always run the command with the “-n” parameter to skip resolving domain names for files representing open network connections.

This command can list opened connections as well!

Not-existing files

A process in Linux can write to a file that seems to not exist any more. A file opened by some process may be deleted by another process or user. This doesn’t break the use of the file descriptor by the first process. The lsof command shows such cases with the annotation “(deleted)”.

An attacker can try to hide his files that way. The command below lists all such cases:
lsof -n | grep deleted

In such cases the df command correctly reports the disk space taken by such files (deleted, but still used by the processes that opened them before deletion).

Affecting commands execution

Aliases

Run the alias command to see what aliases are defined. An attacker can potentially define aliases to alter commands like ps to hide his presence. Do we really run the ps program after executing the ps command?

Environment variables

PATH

A very relevant variable. It can affect what we run when executing commands. The variable can depend on:

  • global configuration files in Linux: /etc/environment, /etc/bashrc (this is for the bash shell, other shells can have their own global configuration files), /etc/profile
  • configuration files in the user’s home directory: .profile, .bashrc, .bash_profile (the last two are relevant for bash only)

NOTE: the configuration file /etc/environment defines environment variables present for every process.

LD_PRELOAD

An even more relevant environment variable! It tells the dynamic linker which libraries to load before all others, so an attacker can inject replacements for important system libraries to hide his activity.

The configuration files /etc/ld.so.conf and the files inside the /etc/ld.so.conf.d directory have a similar meaning. They contain the paths where programs look for libraries.

There is also a system cache in Linux that is used to load libraries. You can preview the cache with the command:
ldconfig -p

How to verify if system programs are legitimate?

If in doubt whether some system executable is original and not replaced by an attacker:

  1. Locate the executable file: which ps
    In my case it gave /bin/ps, but it could be also /usr/bin/ps on some other Linux distro
  2. Find a package that contains it: dpkg-query -S /bin/ps
    If this gives nothing retry with /usr/bin/ps instead of /bin/ps, as some Linux distros are dropping the old Unix tradition of putting some executables in /bin and some in /usr/bin.
    In my case the command gave package name procps
  3. Finally verify the consistency of all executables coming from that package: dpkg --verify procps
    If nothing is printed, it means it’s OK.

Verification of consistency of all packages installed in the system

dpkg --verify

Files

Directories to check

Look into these directories for some unusual stuff:

  • /tmp, /var/tmp – temporary folders
  • /run, /var/run – another kind of temporary folders
  • /dev/shm – this is a RAM-disk! Usually empty. You can see its usage with the command
    df -h

As root, I cannot remove some file

Check attributes of such file with command: lsattr FILE

The letter “i” in the output denotes the immutable attribute. If a file has the immutable attribute set, then it cannot be deleted. To do so, you need to remove the immutable attribute first: chattr -i FILE

File timestamps

The command ls -l shows the timestamp of a file’s last modification. But depending on the file system, a single file can have many timestamps attached. To show all such timestamps use the command:

stat FILE
This can show following timestamps:

  • Access – last read
  • Modify – last change of file’s content
  • Change – last change of file’s attributes, like owner, access mode
  • Birth – file’s creation

WARNING: you cannot assume all these timestamps are valid! An attacker could have modified them. Or updating these timestamps could be turned off for a given filesystem!

This can be checked by reviewing the output of the mount command. If a filesystem was mounted with the “noatime” parameter, it means last access timestamps are not updated. The parameter “nomtime” means last modification timestamps are not updated. The parameter “relatime” means that last access timestamps are not updated in some cases. Refer to the “mount” manual (man mount) for details.

NOTE: for some time (since 2009?) “relatime” has been the default mount parameter in Linux, for efficiency/power saving reasons.

TBC…


Acrobat Reader DC on Linux Mint

In my post “Acrobat Reader for Linux Mint 20.1” from 2021 I claimed that the best version of Acrobat Reader that you can get running on Linux is Acrobat Reader 9, published around 2013. Now I can add: this is the best native-executable version of Acrobat Reader that you can get on Linux.

But Linux is much more powerful. It can run not only its native executables. You can run some executables made for… MS Windows. And thanks to this I will show how to run Acrobat Reader DC v2015 (published around 2016) on Linux Mint. This version has at least one advantage over Acrobat Reader 9: it can correctly verify and visualize digital signatures embedded in PDF documents.

Preparation

Using your favorite package manager install Wine and PlayOnLinux (package name: playonlinux). The latter is not only handy for running some MS Windows-targeted games on Linux, but is also very handy for installing and running some MS Windows applications.

Installation

Step 1: Acrobat Reader DC

  1. Run Play On Linux
  2. Click button to install new software
  3. In the search box enter “acrobat”. One item should be found: Adobe Acrobat Reader DC. Select it and click “Install” button.
  4. After a while a new window will appear with a message similar to: “Note: this script will was successfully tested with Reader DC version 2015.010.20056” (sic!). Click on “Next”.
  5. Then select to download the software
  6. Then just continue selecting “Next”/”OK”.

This will install Acrobat Reader DC v2015 for MS Windows on your Linux. It will be installed, like any other application in PlayOnLinux, in its own “PlayOnLinux’s virtual drive”. You will have the following directory in your home directory: PlayOnLinux's virtual drives/AdobeAcrobatReaderDC

From now on you can run it, but there will be some issues with missing fonts. Rather critical ones: some labels/buttons/drop-down lists will be completely without text, which makes some functions in Acrobat Reader completely unusable.

Step 2: Fonts

  1. Download a ZIP package with fonts for Windows 7 from https://www.w7df.com/p/windows-7.html (maybe there are other sources)
  2. Extract all files from the ZIP package into the following folder in your home directory:
    PlayOnLinux's virtual drives/AdobeAcrobatReaderDC/drive_c/windows/Fonts

Now run Acrobat Reader DC again.

And voila! 🙂 Now you have Acrobat Reader DC v2015 on your Linux! It can not only verify digital signatures embedded in PDF documents, but in my case I was also able to print documents using my HP LaserJet printer.

P.S.

One disadvantage: the whole PlayOnLinux virtual drive with Acrobat Reader DC takes 1.3 GB of disk space… What a waste! I guess this is the cost of keeping each PlayOnLinux application separate from the others.


Sending e-mail with Java 21, SMTP & SSL in 2024

Recently I wanted to implement something that seems trivial: sending an e-mail over SMTP with Java. Let’s stick to Java 21. And I learned that my SMTP provider requires SSL.

This seems trivial because for years there has been the Java Mail API (package: javax.mail), which was once included in Java EE and has for some time been available as an external dependency (JAR) ready to be used in any Java project… And this is the first place where things start to get confusing, thus non-trivial!

Which dependency?

Many examples found on the internet use this dependency for the Java Mail API:


<dependency>
    <groupId>javax.mail</groupId>
    <artifactId>mail</artifactId>
    <version>1.4.7</version>
</dependency>

Examples provided by Google in the documentation for their Gmail API use the above dependency.

But there are more. Searching for “javax.mail” on the Maven Repository web page gives 36 results! There is the “official” Java Mail API web page by Oracle, which sends the reader to 2 different web pages. The first of them is https://javaee.github.io/javamail/, which clearly gives a different dependency. But there is also a note about the project being moved somewhere… The second of the links given by Oracle points to a GitHub repository, https://github.com/javaee/javamail, which sends the reader to another GitHub repository, https://github.com/jakartaee/mail-api. So now it gets a new name: Jakarta Mail API. But wait! There is a note saying the actual implementation is not here, but in another project named “Eclipse Angus”!

This is total chaos! Please note that javax.mail:mail:1.4.7 was published 10 years ago (as of 2023). So it seems not the best idea, but it still works.

After careful analysis I’ve discovered that the last version of Java Mail in the javax.mail package is com.sun.mail:javax.mail:1.6.2, which is the one pointed to by the page https://javaee.github.io/javamail/. This dependency is 5 years old as of 2023 and I’ve verified it is a perfect replacement for the previous one. This is the XML to declare the dependency in Maven:


<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>javax.mail</artifactId>
    <version>1.6.2</version>
</dependency>

If you really need the current version then you need to switch to Jakarta Mail, but keep in mind it is using another package: jakarta.mail instead of javax.mail. This may be a problem if you use another 3rd party library that is using the Java Mail API. Moreover, keep in mind that Jakarta Mail is divided into a couple of separate JARs: the API and the implementation. In my opinion this should be referenced like this (the same is presented on the page https://eclipse-ee4j.github.io/angus-mail/):


<dependency>
    <groupId>jakarta.mail</groupId>
    <artifactId>jakarta.mail-api</artifactId>
    <version>2.1.2</version>
</dependency>
<dependency>
    <groupId>org.eclipse.angus</groupId>
    <artifactId>jakarta.mail</artifactId>
    <version>2.0.2</version>
    <scope>runtime</scope>
</dependency>

Please notice the different groupIds and different version numbers. The above were published in May 2023.

What properties?

Things get even more complex when we want to configure Java Mail to communicate with an SMTP server requiring SSL. To create a javax.mail.Session object (essential for sending e-mails with the Java Mail API) one needs to pass a prefilled java.util.Properties instance. Many examples say to include the “mail.smtp.socketFactory.port” and “mail.smtp.socketFactory.class” properties. In my case it didn’t work at all.

Instead of these properties one needs to use the “mail.smtp.ssl.enable” property set to “true”. Still, it wasn’t enough in my case. I discovered I needed to add the “mail.smtp.ssl.protocols” property as well, with the value “TLSv1.2”.

For me the following set of properties was enough to allow sending e-mails by SMTP with authorization and SSL:


mail.smtp.host = your.smtp.server.address
mail.smtp.port = 465
mail.smtp.auth = true
mail.smtp.ssl.enable = true
mail.smtp.ssl.protocols = TLSv1.2

Just replace “your.smtp.server.address” with the address of your SMTP server. These properties should be accompanied by a javax.mail.Authenticator instance to get the Session instance:


var session = Session.getInstance(smtpProperties, new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(smtpUserName, smtpPassword);
    }
});
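
With the Session in hand, sending a message is straightforward. A minimal sketch (the session variable comes from the snippet above; the addresses and texts are made up for illustration):

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

class MailSender {
    static void send(Session session) throws MessagingException {
        var message = new MimeMessage(session);
        message.setFrom(new InternetAddress("sender@example.com"));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("recipient@example.com"));
        message.setSubject("Test message");
        message.setText("Hello from Java 21 over SMTP with SSL!");
        Transport.send(message); // connects to the SMTP server and sends the message
    }
}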

Timeout settings

With the above properties my program was able to send e-mails. However, production-grade software should always define timeouts when communicating over a computer network, to avoid waiting forever. I haven’t found examples using timeout configuration, but I’ve found this page: https://javaee.github.io/javamail/docs/api/com/sun/mail/smtp/package-summary.html, which documents 3 Java Mail properties responsible for timeout settings:

  • mail.smtp.connectiontimeout – Socket connection timeout value in milliseconds. This timeout is implemented by java.net.Socket. Default is infinite timeout.
  • mail.smtp.timeout – Socket read timeout value in milliseconds. This timeout is implemented by java.net.Socket. Default is infinite timeout.
  • mail.smtp.writetimeout – Socket write timeout value in milliseconds. This timeout is implemented by using a java.util.concurrent.ScheduledExecutorService per connection that schedules a thread to close the socket if the timeout expires. Thus, the overhead of using this timeout is one thread per connection. Default is infinite timeout.

Please note that the default timeouts are infinite (waiting forever!). The first two properties are definitely supported by javax.mail:mail:1.4.7 (verified myself). I was not able to trigger a timeout defined by the last property. So I suggest adding at least the properties below to your set of Java Mail properties (the values are arbitrary, you can tune them to your needs):


mail.smtp.connectiontimeout = 1000
mail.smtp.timeout = 3000

Voila! 🙂

P.S.

As of 2023 SMTP with password authentication is not supported by Gmail, Yahoo and some other big/global free mailbox providers. At least not for free accounts.

Gmail supports SMTP with OAuth2 authentication scheme (which is complex), but without registering a project in Google Cloud Console and convincing Google to switch the project into a production mode (this is not available for a personal use) you can only get refresh tokens valid for 7 days.


Misleading HttpHeaders.getFirst(String)

During the development of REST API microservices with Spring Boot I quite often use the HttpHeaders class. In many Spring APIs this is the only way to access HTTP headers, for example when developing a Spring Cloud Gateway global filter and dealing with the ServerWebExchange class. You can get to the HTTP headers only by using an HttpHeaders object.

The problem

The problem appears when one wants to deal with multi-value HTTP headers. In general, each HTTP header in a single HTTP request can have many values. Most HTTP headers have just one value, like “Content-Length”, but some headers by design will have many values. And in some cases the order of these values is relevant. For example, there are use cases when one wants to process the first value of a given HTTP header. And there is a method in the HttpHeaders class that looks perfect for this and which is specially highlighted in the javadoc: getFirst(String)

The javadoc for this method says: “Return the first header value for the given header name, if any.” And the problem is that the method’s name and this documentation are both very misleading!

Explanation

HTTP headers with multiple values can be expressed in a couple of different ways:

  1. A single header line with multiple values separated by commas (it looks this is the proper way, as given by RFC-7230)
  2. Many header lines with the same header name (maybe not proper, but still possible)
  3. Any mix of the above

So given there is an HTTP request with the header NAME: VALUE1, VALUE2, VALUE3, what will be returned by the HttpHeaders.getFirst(..) method?

It will be the string "VALUE1, VALUE2, VALUE3"!

My problem with this is that based on the javadoc description and based on the method’s name I would expect the string "VALUE1".

Solution

To get the real first value of a given HTTP header you need to use this:

headers.getValuesAsList("NAME").get(0);

Of course this is valid only when the HTTP header is present. This can be easily checked, as the list returned by the getValuesAsList(String) method is never null, so you can call the size() or isEmpty() methods on the returned list.
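
A small sketch demonstrating the difference (the header name and values are made up):

import org.springframework.http.HttpHeaders;

class GetFirstDemo {
    public static void main(String[] args) {
        var headers = new HttpHeaders();
        headers.add("NAME", "VALUE1, VALUE2, VALUE3");

        // Prints the whole comma-separated string: "VALUE1, VALUE2, VALUE3"
        System.out.println(headers.getFirst("NAME"));

        // Prints "VALUE1" – the real first value
        var values = headers.getValuesAsList("NAME");
        if (!values.isEmpty()) {
            System.out.println(values.get(0));
        }
    }
}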

P.S.

Here is the related issue.


Traps with Spring Boot internationalization defaults

You don’t need to define a LocaleResolver bean

Many articles on the Web about Spring Boot internationalization start by saying that you need to define a LocaleResolver bean. When I tried googling for Spring Boot internationalization, all top 3 results mentioned this. In my opinion this is not needed, as Spring Boot provides a default LocaleResolver bean that interprets the “Accept-Language” HTTP header from a request. This is exactly what you need when developing a REST API service.

You can call the LocaleContextHolder.getLocale() static method to get a Locale object that corresponds to the value of the “Accept-Language” HTTP request header. So you can access the requested locale without passing an extra parameter around.
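
For example, here is a small sketch of resolving a label for the requested locale (the messageSource bean and the “foo-bar” key are assumptions for illustration; a matching bean definition is shown at the end of this post):

import org.springframework.context.MessageSource;
import org.springframework.context.i18n.LocaleContextHolder;

class LabelResolver {

    private final MessageSource messageSource; // e.g. a ResourceBundleMessageSource bean

    LabelResolver(MessageSource messageSource) {
        this.messageSource = messageSource;
    }

    String fooBarLabel() {
        // The locale is taken from the "Accept-Language" header of the current request
        return messageSource.getMessage("foo-bar", null, LocaleContextHolder.getLocale());
    }
}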

Resource bundle and the first default thing

To handle internationalization one usually creates a resource bundle. Usually this takes the form of a set of property files for the supported languages, like: labels_en.properties (en – English), labels_pl.properties (pl – Polish), labels_da.properties (da – Danish) and so on.

But one can extend such a resource bundle with a labels.properties file as well. This serves as a source of translations that are not found in the part of the resource bundle specific to a given language. So it’s a kind of default file (a fallback). It works like this: for a given key, if there is no translation in the file for a given locale, then the translation is taken from this default file.

Let’s assume we have 4 property files in our resource bundle: labels_en.properties, labels_pl.properties, labels_da.properties and labels.properties.

So if we’re looking for the translation for the key “foo-bar” and the given locale is PL (an HTTP request with the “Accept-Language” header set to “pl” is being handled) and the file labels_pl.properties doesn’t have a “foo-bar” entry, then the file labels.properties will be used to find the translation.

But what if the given locale is IT (Italian)? There is no properties file for this locale in our example. Will the default properties file be used for this? No! At least not with the default setup.

Matching locale with resource bundle – the second default thing

In Spring Framework we usually use resource bundles with the help of the ResourceBundleMessageSource class. By default, if there is no file in the resource bundle for the given locale, the system locale will be used to select a properties file from this resource bundle. Yes, and this can be very tricky!

The tricky part is that on your computer the system locale (the JVM default locale) may be something else than the system locale on your CI/CD server or on your test/production environment. Thus the outcome may be very surprising. Example: if some HTTP request has an “Accept-Language” header with the “it” value (an Italian-language request) and our resource bundle doesn’t have a file for Italian, then by default the system locale will be used, so it can be English (en_US is quite a common locale on the server side) and then the file for English will be used for translations. It will give very different results on a machine where the system locale is Polish…

Tip

Depending on a system locale is like depending on something you usually don’t control. It’s a bad idea. Don’t let your program depend on a system locale or a system time zone.

Solutions

There are a couple of solutions:
  1. One can set a default locale for the resource bundle. This can be done in the code that sets up the Spring bean of the ResourceBundleMessageSource class, using the setDefaultLocale(..) method.
  2. One can set the “fallbackToSystemLocale” property of the ResourceBundleMessageSource bean to “false” (by default it’s “true”). This will cause the default properties file to be used for translations for locales not covered by your resource bundle.

Example

Below is an example of Java configuration for a MessageSource bean that avoids the unexpected behaviour of using a system locale as a fallback (it uses solution 1):
@Bean
public MessageSource labelsMessageSource() {
  var msgSrc = new ResourceBundleMessageSource();
  msgSrc.setBasename("labels");
  msgSrc.setDefaultEncoding("UTF-8");
  msgSrc.setDefaultLocale(Locale.ENGLISH);
  return msgSrc;
}
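
Alternatively (solution 2), the relevant setter is available on the same class, so you could add the following line to the bean definition above instead of (or in addition to) setDefaultLocale:

  msgSrc.setFallbackToSystemLocale(false);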

Hints on how to publish a Java code

As a seasoned Java programmer you may want to publish some Java code. Maybe because you believe in the Open Source movement, or maybe you just want to reuse this code more easily? No matter what your motives are, here are my hints on this topic.

Disclaimer: I’m using Maven, GitHub and jgitver.

Java package

Published Java code should definitely belong to a non-default package. The name of the package should somehow point to you, the author of the code, or to some group of authors, or to some organisation where the authors cooperate. Here I will focus on the first case: a good Java package name for an individual. A good Java package name is nice and unique, and traditionally the latter is achieved by using a reversed domain name. So… do you need your personal domain name? Maybe not really…

I think GitHub helps here a lot. It allows you to create a web page having a nice domain name containing your GitHub user name. Assuming your user name on GitHub is XYZ, you can get the domain XYZ.github.io, which in turn assures you that the Java package io.github.XYZ is unique! Just create such a GitHub Pages site for your account.

Now you can use the domain created by GitHub for your site to name your Java packages.

Git repository

For this you use GitHub again. Just create a new repository for the Java code that you want to publish. Points not to miss here are choosing a licence and adding a README file.

👉 Short summary: now you can clone the newly created repository to your computer and write your Java code. Pushing to this repository means publishing your code to the whole world!

GitHub Personal Access Token

If you decide to clone the repository over HTTPS then you need to create a Personal Access Token first. A token is just a text that you use instead of your password (for example, with the git push command), but on its creation you decide which repositories the token is allowed to access and with what scope. I find it useful to select the following permissions when creating a token:

Read and Write access to actions, code, commit statuses, pull requests, and workflows

Don’t expose rubbish

Remember to add a .gitignore file in the root directory of your project/repository. The idea is to avoid accidentally committing and pushing files that don’t belong to the pure source code. A sample content:

target
.idea

Tests run by CI

It’s better to show others that your Java code compiles correctly and fulfills all requirements not only on your computer, but on a Continuous Integration server (like GitHub) as well. For this we can use something called GitHub Actions. It’s a workflow defined by a dedicated file in your repository (similar things exist for GitLab and Jenkins).

The simplest way: create a file .github/workflows/maven.yaml (I guess the filename can be different, but the directory path must be exactly like this) in your repository and put the content below into the file, assuming your master branch is called “main”:

name: 'Java CI with Maven'
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: 'Set up JDK'
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'adopt'
          cache: maven
      - name: 'Build with Maven'
        run: mvn -B verify

NOTE: To push a commit modifying a GitHub workflow file your Personal Access Token must cover workflow permission/scope.

It may help to look at the documentation for the actions/setup-java step.

NOTE 2: If your workflow doesn’t start, please read the article “My GitHub Actions workflows are not starting”. It helped in my case.

👉 Short summary: as of now you have published your Java code on GitHub, it uses a nice and unique Java package name, it has a licence and a README file, so people know what it is and how they’re allowed to use it. And it has a CI process showing your code is OK.

Still you cannot easily use it as a Maven dependency in another project. Can GitHub help us more?

The “Release” functionality on GitHub

When I started exploring GitHub, the “Releases” pane and the “Create a new release” link grabbed my attention. You can find it as pictured below:

It turned out the “Release” functionality does two important things:

  1. it creates a lightweight tag in Git – worth noting here: the created tag is not an annotated tag
  2. it triggers a GitHub workflow if such one is defined

You create a release manually, by clicking this marked link “Create a new release”. Then you provide:

  • a version number (tag name)
  • a release title
  • a description
  • optionally you can upload some files to be attached to the new release

The created release will just contain ZIP and TAR.GZ files with snapshot of your code being tagged.

👉 Short summary: now you have versioning, but it uses lightweight tags. Still, you cannot easily use your published code as a Maven dependency in another project. Sounds disappointing, but we’re on track.

Publishing artifact on GitHub Packages

GitHub Packages is an alternative to the Maven Central Repository (and some other popular repositories). After I started reading about how to publish an artifact on Maven Central, I quickly realized that I would like something simpler for my own needs.

The relevant feature of my solution is the integration with jgitver. This is a tool that dynamically generates a version number obtained from Git, and this version is used by Maven during a build. So you don’t need to update the <version> tag in your main pom.xml file. Generally you can keep a pseudo-version in your pom.xml all the time:

<!-- The final version is provided by jgitver-maven-plugin -->
<version>0.0.0</version>

This is significantly different than manually updating version in the pom.xml file or using maven-release-plugin.

NOTE: jgitver needs an annotated tag to generate a non-snapshot version number!

Because of this I decided that the GitHub workflow for releasing needs to convert the lightweight tag (created as part of a new release on GitHub, when one enters the new version number as the tag name) into an annotated tag, which will cause jgitver to generate a proper non-snapshot version number.

So to add publishing (deploying) of the artifact built from your Java code to GitHub Packages, using jgitver, you need 4 changes in your project:

  1. adding Maven extensions configuration file – to use jgitver
  2. adding jgitver configuration file
  3. adding GitHub repository in <distributionManagement> of your pom.xml
  4. adding new GitHub workflow

1) Maven’s extensions.xml

Create a file with path .mvn/extensions.xml with following content:

<extensions xmlns="http://maven.apache.org/EXTENSIONS/1.0.0" 
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/EXTENSIONS/1.0.0 
        http://maven.apache.org/xsd/core-extensions-1.0.0.xsd">
    <extension>
        <groupId>fr.brouillard.oss</groupId>
        <artifactId>jgitver-maven-plugin</artifactId>
        <version>1.9.0</version>
    </extension>
</extensions>

2) jgitver configuration file

Create a file with path .mvn/jgitver.config.xml with following content:

<configuration xmlns="http://jgitver.github.io/maven/configuration/1.1.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://jgitver.github.io/maven/configuration/1.1.0
        https://jgitver.github.io/maven/configuration/jgitver-configuration-v1_1_0.xsd">
    <nonQualifierBranches>master,main</nonQualifierBranches>
</configuration>

3) pom.xml

Just add the fragment below to the main pom.xml and replace USER with your GitHub user name and REPO with the name of your repository (but don’t change the repository “id”):

<distributionManagement>
	<repository>
		<id>github</id>
		<name>GitHub Packages</name>
		<url>https://maven.pkg.github.com/USER/REPO</url>
	</repository>
</distributionManagement>

4) GitHub workflow for publishing

Create a new YAML file in the .github/workflows directory of your project. The filename is up to you. The file should have the following content, with the words EMAIL and USER replaced by your e-mail address and GitHub user name:

name: 'Publish package to GitHub Packages'
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: 'Fetch all tags'
        run: git fetch --depth=1 origin +refs/tags/*:refs/tags/*
      - name: 'Convert lightweight tag to annotated tag'
        run: |
          git config --global user.email "EMAIL"
          git config --global user.name "USER"
          LAST_TAG=$(git tag --sort=v:refname|tail -1)
          LAST_ANNOTATED_TAG=$(git describe || echo "")
          if [ "$LAST_TAG" != "$LAST_ANNOTATED_TAG" ]; then git tag -a -f $LAST_TAG -m "$LAST_TAG" ; else echo "NOP"; fi
      - name: 'Set up JDK and Maven'
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: 'Publish package with Maven'
        run: mvn -B deploy
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          JGITVER_BRANCH: main

Please note the step “Convert lightweight tag to annotated tag”! Thanks to this, the GitHub Release functionality cooperates correctly with jgitver.

Push these changes. You will notice almost nothing new. But after clicking on Actions you will see the new workflow:

This workflow is triggered… by creating a new release, as described earlier! Now creating a new release means not only creating a Git tag with a version number, but also running this workflow, which in turn executes the mvn deploy command with the preselected repository id “github”. Refer to the documentation for details.

Using artifact from GitHub Packages

Unfortunately, artifacts on GitHub Packages are not available to everybody. Other people can see and download your code, even the JAR file, but they cannot refer to your artifact from their pom.xml files. This is because GitHub Packages requires token authentication to fetch artifacts.

But this is not a problem for you when you just want to reuse your artifact as a dependency in a couple of your projects. You already have the token for accessing your GitHub repository. The same token is valid for accessing your artifact on GitHub Packages. All you need to do is create a Maven settings file and add a repository definition to your pom.xml.

The Maven settings file is ~/.m2/settings.xml and the following content is enough to set up authentication for GitHub Packages:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                      http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <server>
      <id>REPO_ID_IN_POM_XML</id>
      <username>USER</username>
      <password>TOKEN</password>
    </server>
  </servers>
</settings>

Then in your pom.xml file you just need to add:

    <repositories>
        <repository>
            <id>REPO_ID_IN_POM_XML</id>
            <url>https://maven.pkg.github.com/USER/REPO</url>
        </repository>
    </repositories>