Project Reactor’s Mono, CompletableFuture and retrying

The Mono type from Project Reactor provides several methods for retrying, which looks nice. It seems that when you have a Mono object you can easily add retry functionality, for example:

myMono = myMono.retryWhen(Retry.backoff(2, Duration.ofMillis(500)));

On the other hand, one can create a Mono object from a CompletableFuture or a CompletionStage, which seems handy when one needs to process a result returned in such a form by some library or by other code that you cannot simply change:

myMono = Mono.fromCompletionStage(funReturningCompletableFuture());

So what will happen if you want to add retrying to the above? Like:

myMono = Mono.fromCompletionStage(funReturningCompletableFuture())
        .retryWhen(Retry.backoff(2, Duration.ofMillis(500)));

It turns out retrying will not work in this case! The above code will silently behave as if no retrying was applied. And I haven’t spotted a single word about this in the Project Reactor documentation.

After some thought this seems obvious. How could a Mono object know how to retry a procedure when all it gets is the result, in the form of a CompletionStage reference? The CompletionStage interface doesn’t expose any method for retrying at all.

Solution

The solution is to use another variant of the fromCompletionStage factory method, the one accepting a CompletionStage supplier (given as a lambda in the example below):

myMono = Mono.fromCompletionStage(() -> funReturningCompletableFuture())
        .retryWhen(Retry.backoff(2, Duration.ofMillis(500)));

Now retrying will work as expected!
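
For illustration, here is a minimal side-by-side sketch of both variants (callRemoteService() stands for any hypothetical method of yours returning a CompletableFuture<String>):

// eager variant: the future is created exactly once, before any retry logic is attached,
// so a retry could only replay the already-completed (or already-failed) result
CompletableFuture<String> future = callRemoteService();
Mono<String> eager = Mono.fromCompletionStage(future)
        .retryWhen(Retry.backoff(2, Duration.ofMillis(500)));

// lazy variant: the supplier is invoked again on every retry attempt,
// so each attempt gets a fresh CompletableFuture
Mono<String> lazy = Mono.fromCompletionStage(() -> callRemoteService())
        .retryWhen(Retry.backoff(2, Duration.ofMillis(500)));

(The snippet assumes the usual imports: reactor.core.publisher.Mono, reactor.util.retry.Retry, java.time.Duration and java.util.concurrent.CompletableFuture.)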


Hints for Spring Cloud Load Balancer

Some time ago Spring Cloud dropped its integration with Netflix Ribbon, the client-side load balancer solution, and replaced it with its own: Spring Cloud Load Balancer. This move posed quite a challenge for those who want to migrate from an older Spring Boot/Spring Cloud and who are forced to use client-side load balancing. Moreover, in my opinion, the available documentation is not clear enough.

DISCLAIMER: In this post I’m referring to Spring Cloud from release train 2021.0.1.

So here are a couple of hints from me:

DiscoveryClient or ReactiveDiscoveryClient?

A “discovery client” is the part of the client-side load-balancing puzzle which is responsible for establishing the addresses of individual service instances. You don’t need to care much about this when you can use Eureka or Consul – there are Spring Cloud-provided implementations of discovery clients for these cases and I will not cover them. Instead I will focus on the harder case when you need to provide your own implementation of a discovery client.

The first confusion is when you realize there are 2 unrelated interfaces in Spring Cloud Commons for implementing discovery clients, both in package org.springframework.cloud.client.discovery:

  1. DiscoveryClient – for blocking approach
  2. ReactiveDiscoveryClient – for non-blocking (reactive) approach

If you’re not sure which of them is going to be used, you need to… provide both implementations. When you want to select just one approach you get confused for the second time, as there are 2 separate properties for this: spring.cloud.discovery.blocking.enabled and spring.cloud.discovery.reactive.enabled. What if both properties are set to false? Is this the same as setting spring.cloud.discovery.enabled=false?

I didn’t dig into this and stayed with the default values, which seem to be that both approaches are turned on. So it seems the DiscoveryClient implementation should be used with blocking HTTP clients like RestTemplate or Feign, and the ReactiveDiscoveryClient implementation should be used with a reactive HTTP client like WebClient. Or maybe it’s some other way? It’s not clear.

In some of my tests (using @SpringBootTest) it turned out that RestTemplate was using the blocking discovery client, while in some others it was using the reactive discovery client (the latter most likely caused by the presence of the spring-webflux dependency). In all my tests both DiscoveryClient and ReactiveDiscoveryClient implementations were available in the Spring context.

Hints

👉 When developing your own discovery client you need to deliver 2 implementations: an implementation of DiscoveryClient and an implementation of ReactiveDiscoveryClient (a minimal sketch of both follows after these hints).

👉 Alternatively you can try disabling the blocking client (spring.cloud.discovery.blocking.enabled = false) or the reactive client (spring.cloud.discovery.reactive.enabled = false) and then implementing only the other one (I didn’t try this).

👉 If WebClient is not present in the classpath (no dependency on spring-webflux), then you can skip the ReactiveDiscoveryClient implementation.

👉 You don’t need to use the @EnableDiscoveryClient annotation if you create Spring beans from the classes implementing discovery clients.
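
Below is a minimal sketch of such a pair of implementations (the service name, host and port are made up; register both classes as Spring beans, e.g. via @Bean methods or @Component):

public class MyOwnDiscoveryClient implements DiscoveryClient {
  @Override
  public String description() {
    return "my own blocking discovery client";
  }

  @Override
  public List<ServiceInstance> getInstances(String serviceId) {
    // resolve instances of the given service, e.g. from a static list or an external registry
    return List.of(new DefaultServiceInstance("my-service-1", serviceId, "host1.example.com", 8080, false));
  }

  @Override
  public List<String> getServices() {
    return List.of("my-service");
  }
}

public class MyOwnReactiveDiscoveryClient implements ReactiveDiscoveryClient {
  @Override
  public String description() {
    return "my own reactive discovery client";
  }

  @Override
  public Flux<ServiceInstance> getInstances(String serviceId) {
    return Flux.just(new DefaultServiceInstance("my-service-1", serviceId, "host1.example.com", 8080, false));
  }

  @Override
  public Flux<String> getServices() {
    return Flux.just("my-service");
  }
}

(Relevant imports: org.springframework.cloud.client.discovery.DiscoveryClient, org.springframework.cloud.client.discovery.ReactiveDiscoveryClient, org.springframework.cloud.client.ServiceInstance, org.springframework.cloud.client.DefaultServiceInstance, reactor.core.publisher.Flux and java.util.List.)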

What is a composite discovery client?

A quite surprising Spring Cloud feature is that it creates additional beans implementing the DiscoveryClient and ReactiveDiscoveryClient interfaces that act as umbrellas over all discovery clients and all reactive discovery clients respectively. These are the so-called “composite discovery clients”. And yes, there are two of them: the blocking CompositeDiscoveryClient and the non-blocking ReactiveCompositeDiscoveryClient.

As a result, after adding your custom implementations – let’s call them MyOwnDiscoveryClient and MyOwnReactiveDiscoveryClient – you will have the following beans in the Spring context:

  1. CompositeDiscoveryClient – a default bean implementing DiscoveryClient. It contains a list of all non-composite blocking discovery clients from the application context.
  2. SimpleDiscoveryClient – a bean implementing DiscoveryClient and having some low priority
  3. MyOwnDiscoveryClient – your implementation with the default priority, which by design is higher than the above-mentioned bean’s priority
  4. ReactiveCompositeDiscoveryClient – a default bean implementing ReactiveDiscoveryClient. It contains a list of all non-composite non-blocking discovery clients from the application context.
  5. SimpleReactiveDiscoveryClient – a bean implementing ReactiveDiscoveryClient and having some low priority
  6. MyOwnReactiveDiscoveryClient – your implementation with the default priority, which by design is higher than the above-mentioned bean’s priority

These composite discovery clients delegate to the non-composite discovery clients, invoking them in a loop in the order determined by Spring bean ordering. The first non-empty response is returned. In our example: CompositeDiscoveryClient will first invoke MyOwnDiscoveryClient. If its response is non-empty, it will be returned. Otherwise SimpleDiscoveryClient will be invoked.

Hints

👉 Usually you don’t need to care about CompositeDiscoveryClient or ReactiveCompositeDiscoveryClient. You may think of them as wrappers that invoke your custom discovery client implementation.

👉 But when creating a shiny new unit test with the @SpringBootTest annotation to verify your own discovery client implementation, don’t be surprised by what you will get with:

@Autowired
DiscoveryClient someClient;

Of course the someClient reference will point to CompositeDiscoveryClient! In this case there are 3 beans implementing the DiscoveryClient interface, but CompositeDiscoveryClient is the default one (it has the @Primary annotation). Use the MyOwnDiscoveryClient type instead of DiscoveryClient to make Spring inject your desired bean.
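
So in such a test the injection of your own implementation could look like this:

@Autowired
MyOwnDiscoveryClient someClient; // now Spring injects your bean, not the @Primary CompositeDiscoveryClient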

Properties or Java configuration?

Actually both are needed. Properties have some default values and the Java-config has a default setup of the ServiceInstanceListSupplier component. You have to realize that you can only use those Spring Cloud Load Balancer properties which are consumed by the elements defined by this setup. Closer analysis of the LoadBalancerClientConfiguration.java file reveals that the default ServiceInstanceListSupplier is built by the following (assuming @Conditional(DefaultConfigurationCondition.class) is triggered):

ServiceInstanceListSupplier.builder()
    .withDiscoveryClient()
    .withCaching()
    .build(context);

This means the default Spring Cloud Load Balancer cooperates with a discovery client (Q: how could it possibly even work without a discovery client?) and supports caching of discovered instances. Obviously this setup will not handle the health-check related properties (spring.cloud.loadbalancer.health-check.*).

Hints

👉 If you are OK with the default Spring Cloud Load Balancer (cooperating with a discovery client and optionally with caching), then you don’t need custom Java-config.

👉 If you want to use health-checking of service instances, then some custom Java-config is needed. I found it especially useful to set up a specific WebClient instance for health-check requests. Such a Java-config class needs to contain a @Bean-annotated method returning a ServiceInstanceListSupplier. Example:

public class CustomLoadBalancerConfig {
  @Bean
  public ServiceInstanceListSupplier myInstancesSupplier(ConfigurableApplicationContext context) {
    return ServiceInstanceListSupplier.builder()
      .withDiscoveryClient()
      .withHealthChecks(buildCustomWebClient()) // buildCustomWebClient() is your own helper returning a WebClient
      .build(context);
  }
}

👉 Remember that a custom Java-config class for Spring Cloud Load Balancer cannot be a Spring bean, so it cannot have the @Configuration annotation. If you miss this, it will be a source of mysterious errors.

👉 That custom Java-config class must be referenced only in the @LoadBalancerClient or @LoadBalancerClients annotation, which itself is placed on some normal Java-config class with the @Configuration annotation or on the main class (the one with @SpringBootApplication).

👉 Actually I find the @LoadBalancerClients annotation the best way to define a general configuration for all load-balanced clients.
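
A minimal sketch of this (reusing the CustomLoadBalancerConfig class from the earlier hint; the LoadBalancingSetup class name is arbitrary) could look like:

@Configuration
@LoadBalancerClients(defaultConfiguration = CustomLoadBalancerConfig.class)
public class LoadBalancingSetup {
}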


How to run RabbitMQ with a predefined queue using docker-compose?

Docker-compose is a very powerful tool for running the services needed by a program one develops. In a typical enterprise scenario a program (usually called a micro-service these days) is integrated with many services like a database, a messaging broker, etc. With docker-compose one can start all such services on a local computer to quickly verify how things work together.

I have known that for some years, but just today I needed to run RabbitMQ such that… it has a preconfigured message queue. Some library in a program I was modifying, prepared by developers who no longer work in the company, was complaining about a missing message queue. So I just wanted to have RabbitMQ with this message queue in my Docker Compose setup.

The problem

How to run the RabbitMQ docker image, like “rabbitmq:3.9-alpine”, using docker-compose, such that there is a preconfigured queue and all this without building a custom RabbitMQ image?

The solution

First, I found the article “Creating a custom RabbitMQ container with preconfigured queues”, which gave me an idea of how to handle starting RabbitMQ with a preconfigured queue under Docker. This is a great article… except that it didn’t work for RabbitMQ in version 3.9. And except that the article is about building a custom Docker image, which I wanted to avoid.

So I created my own solution which consists of 4 files: one docker-compose file and 3 configuration files for RabbitMQ.

1st file: docker-compose.yml

This file shows the crucial step that avoids creating a custom Docker image for RabbitMQ. Thanks to the “volumes” directive in Docker Compose we provide the required configuration files to the standard RabbitMQ image.

version: '3'
services:
  localRabbitMQ:
    image: "rabbitmq:3.9-alpine"
    ports:
      - 5672:5672
    volumes:
      - type: bind
        source: ./rabbitmq-enabled-plugins
        target: /etc/rabbitmq/enabled_plugins
      - type: bind
        source: ./rabbitmq.config
        target: /etc/rabbitmq/rabbitmq.config
      - type: bind
        source: ./rabbitmq-defs.json
        target: /etc/rabbitmq/rabbitmq-defs.json

2nd file: rabbitmq-enabled-plugins

The purpose of this file is to enable some plugins in RabbitMQ. In our case “rabbitmq_management” is the only plugin we need. As one can see above, this file must be placed at /etc/rabbitmq/enabled_plugins in the RabbitMQ container.

[rabbitmq_management].

3rd file: rabbitmq.config

This one is just a copy from the above-mentioned article.

[
  {
    rabbit,
      [
        { loopback_users, [] }
      ]
  },
  {
    rabbitmq_management,
      [
        { load_definitions, "/etc/rabbitmq/rabbitmq-defs.json" }
      ]
  }
].

4th file: rabbitmq-defs.json

This file is again a copy from the above-mentioned article. Replace “YOUR-QUEUE-NAME” with whatever you need.

{
  "exchanges": [
    {
      "name": "YOUR-QUEUE-NAME",
      "vhost": "/",
      "type": "fanout",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "queues": [
    {
      "name": "YOUR-QUEUE-NAME",
      "vhost": "/",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "YOUR-QUEUE-NAME",
      "vhost": "/",
      "destination": "YOUR-QUEUE-NAME",
      "destination_type": "queue",
      "routing_key": "*",
      "arguments": {}
    }
  ]
}

Now you only need to run the command: docker-compose -f docker-compose.yml up
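
To check that the queue was really created, one can (with the setup above) list the queues inside the running container:

docker-compose exec localRabbitMQ rabbitmqctl list_queues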

Voilà! 🙂


Acrobat Reader for Linux Mint 20.1

As of 2021, Acrobat Reader 9 (from 2013) is still the best way to open “interactive” PDF documents on Linux! By “interactive” I mean a PDF document with form fields to fill and some extra logic triggered by filling these fields. To be more specific, I’m referring to PDF files using the “AcroForm” extension. In such a file the text “AcroForm” can be found somewhere inside when it is opened as plain text:

<</AcroForm 57 0 R/Extensions<</ADBE<</BaseVersion/1.7/ExtensionLevel 3>>>>/Metadata 33 0 R/Names 58 0 R/NeedsRendering true/Pages 47 0 R/Type/Catalog>>

You can read about “AcroForm” in Wikipedia or here. What is important is that these “interactive” PDFs are not XFA PDFs. This was really confusing for me, as many claim that PDFs with a form to fill are certainly XFA, that they should be handled by some Linux PDF readers like Evince, or that LibreOffice Draw can open them as vector graphics. And this doesn’t work at all.

It looks like the “AcroForm” thing is not a common standard, but something made exclusively by Adobe. And this perfectly explains why only a PDF reader from Adobe can handle such files without problems.

Problem

The problem is that sometimes one has to open and fill a form provided as an “interactive” PDF (“AcroForm”). The most obvious thing to do is to follow the general advice often given by sites providing such a PDF file: “use the free Adobe Reader software”. So one goes to the Adobe web page and looks for Adobe Reader. The first move is to try the latest version available…

The latest one is called “Adobe Reader DC” and there are rumours that it can be installed on Linux using Wine or PlayOnLinux. Unfortunately this didn’t work for me at all.

The problem here is that Adobe dropped Linux support for its Adobe Reader program just after version 9.

The solution

Use Adobe Reader 9 – the latest one available in the 9.x series, which is 9.5.5. The old way of installing this on Linux doesn’t work on Linux Mint 20.1, so don’t try the approach I presented in my post How to install Adobe Reader 9 on Ubuntu 14.04. Now the procedure is:

  1. Go to Adobe FTP server ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/ and download the DEB file (AdbeRdr9.5.5-1_i386linux_enu.deb).
  2. Install some packages by this command (as given by this how-to, maybe not needed as GDebi installer can resolve dependencies?):
    sudo apt install gdebi-core libxml2:i386 libcanberra-gtk-module:i386 gtk2-engines-murrine:i386 libatk-adaptor:i386
    
  3. Then install the DEB file – right-click on the file and select the first menu-item (something like “install with GDebi…”).

That’s all!


Off-line migration of QEMU/KVM virtual machine

Assumptions

You have 2 computers, both with Linux (Linux Mint in my case). On one computer you have a QEMU/KVM virtual machine and you want to copy/move it to the other computer. On the second computer you already have QEMU/KVM installed and ready. You want to follow an off-line migration – the virtual machine is stopped first, then recreated on the second computer, and then you can start it again there.

Solution

  1. Stop the virtual machine on the first computer.
  2. Locate the disk image file (like .qcow2 file) used by the virtual machine on the first computer. The Virtual Machine Manager program can help here. The file will be large (usually some GBs) but you need to copy it (somehow) to the second computer.
  3. Export the virtual machine definition to an XML file:
    virsh dumpxml VMNAME > my_vm.xml
    
  4. Copy the XML file to the second computer. Edit the file to update the path to the disk image: search for a <disk> tag. It should contain a <source> tag having a “file” attribute – you need to update its value so that it points to the disk image file copied in step 2 (an example snippet is shown after this list).
  5. If the VM was attached to custom-defined networks, there are some more steps – see: https://serverfault.com/questions/434064/correct-way-to-move-kvm-vm
  6. On the second computer run:
    virsh define my_vm.xml
    
  7. Run the Virtual Machine Manager on the second computer. It should show a new virtual machine that you have just imported. Run it.
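
For illustration, the relevant fragment of the edited XML (referenced in step 4) could look like this – the path to the .qcow2 file is of course just an example:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/home/user/VMs/my_vm.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>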

In my case the first run failed with the error message “the CPU is incompatible with host CPU: Host CPU does not provide required features: xop, fma4, tbm”. I solved the issue by going to the tab with the virtual machine details (in Virtual Machine Manager), then going to the “Processor” section and ticking the checkbox “Copy host CPU configuration”.

You can find some more information here: https://documentation.suse.com/sles/15-SP1/html/SLES-all/cha-libvirt-config-gui.html#id-1.12.4.8.9.5

Then I started my virtual machine again and it was fine!


How to install KVM/QEMU on Linux Mint 20.1

From time to time I want to run a virtual machine on my computer – a sandbox containing another operating system with some programs running in total isolation. Under Linux my answer to this need is a set of three components:

  1. KVM – a virtualization module in the Linux kernel that allows the kernel to function as a hypervisor. (Wikipedia)
  2. QEMU – a machine emulator and virtualizer that can perform hardware virtualization. It can cooperate with KVM to run virtual machines at near-native speed. (Wikipedia)
  3. Virtual Machine Manager – a nice GUI to use the above things as simply as possible.

Problem

How to prepare all of the above components on a fresh installation of Linux Mint 20.1?

Solution

This solution is based on the article “Install KVM Virtualization on Linux Mint 20” with my additions.

First execute the following commands:

sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
sudo adduser $USER libvirt
sudo adduser $USER kvm
sudo adduser $USER libvirt-qemu

At this point the above-mentioned article claimed everything is ready and working. But it wasn’t in my case. So just restart Linux now. Then proceed with the verification steps:

virsh -c qemu:///system list

The output should be:

 Id   Name   State
--------------------

Then execute:

systemctl status libvirtd.service

The output should start with the following lines:

● libvirtd.service - Virtualization daemon
     Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-02-03 22:54:41 CET; 31s ago

Then start Virtual Machine Manager. It should show that it is connected to KVM/QEMU.

Now it’s ready.


How to install Wine 6.0 on Linux Mint 20.1

From time to time one needs to run a program that was made for Windows – like a video game my son got (“Lego the Hobbit”) – but one doesn’t want to run Windows at all. Still, this is possible on Linux thanks to the Wine project.

Problem

The problem is that the repositories for Linux Mint / Ubuntu tend to offer a quite old (outdated) version of Wine. For Linux Mint 19 it was version 3.x or 4.0 of Wine, while the stable version published by WineHQ at that time was 5.0. Now the stable version of Wine is 6.0 and Linux Mint 20.1 has only Wine 5.0 in its repositories. So to have a newer version one has to install it in a little more complex way… like 7 bash commands. 🙂

Solution

Use these instructions, based on the “Install Wine 6.0 in Ubuntu 20.04 & Linux Mint” article, but corrected and verified by myself on a fresh installation of Linux Mint 20.1 64-bit (with Cinnamon). First execute the following commands:

sudo apt-get install libgnutls30:i386 libldap-2.4-2:i386 libgpg-error0:i386 libxml2:i386 libasound2-plugins:i386 libsdl2-2.0-0:i386 libfreetype6:i386 libdbus-1-3:i386 libsqlite3-0:i386
sudo dpkg --add-architecture i386
wget -nc https://dl.winehq.org/wine-builds/winehq.key
sudo apt-key add winehq.key

Now there is a step which depends on the version of Linux. The command below, with the word “focal”, is for Linux Mint 20.x:

sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ focal main'

For Linux Mint 19.x you need to replace “focal” with “bionic”, and for Linux Mint 18.x with “xenial”. For these older Linux Mint versions (19.x, 18.x) you also need to install “libfaudio0” at this step: following the instructions from WineHQ, this requires downloading 2 files – libfaudio0_19.07-0_bionic_i386.deb and libfaudio0_19.07-0_bionic_amd64.deb – and installing them with the help of GDebi.

Then execute the following commands:

sudo apt update
sudo apt install --install-recommends winehq-stable

And voilà! The latest stable Wine is ready. I’ve verified the procedure on 2 computers, with Linux Mint 20.1 and Linux Mint 19.3.
The result: the above-mentioned video game was installed without problems from the original DVD and we were able to play it without issues. 🙂

P.S.

A month ago, in December 2020, it was precisely the 10th anniversary of this blog! 😀 Wow! 10 years have passed since my first blog post. Thanks for reading! 🙂


Problem with Logitech USB headset: can’t hear voice

This is the most ridiculous headset issue I’ve ever had: with a brand new USB headset from Logitech, model 960, after some months of normal operation a problem appeared such that one could no longer hear voice, although it was still OK to hear music. Yes! One could hear music but generally without hearing any voice!!!

The issue was verified with different software (web browsers, Skype, MS Teams, etc.) on different computers, with Linux and with Windows. Unconditionally: fine with music, but no voice!

Having some education in electronics, I started to believe that the sound amplifier built into the headset got broken and stopped amplifying some part of the bandwidth. Exotic, right?

The solution

The solution was as simple (and ridiculous!) as changing the left-right balance from the default, neutral position.

All credits should go to the author of the “How To – Fix the Sound on the Logitech USB Headset!” video (published in 2016!), which is presenting how to fix this issue under MS Windows (setting one channel, left or right, to 0% and the other one to 100%).

Under Linux this usually looks different – you need to move the left/right balance slider to a position other than the center, using a sound control panel like mate-volume-control.

Actually I recommend setting the slider position somewhere near the center. After this simple step the headset started to work as expected! Voice is back. One can hear music as well. I believe Logitech is to blame.

P.S.

This is published in “Linux” category but the issue itself was present on Windows as well.


Special characters in a key in Spring Boot YAML file

To use some special characters in a key of a property in a YAML file (like application.yml) processed by Spring Boot 2.0 (or later) you need to use a specific syntax. A key element containing “/” or “@” or “+” (and possibly some other special characters) needs to be surrounded by square brackets. Example:

foo.bar:
  "[http://something]": value

This is needed to define a property named "foo.bar.http://something" in Spring Boot 2.0 or later. According to Spring Boot issue #13404 (Allow map binding to work with unescaped characters), it was not needed in Spring Boot 1.5 (and most likely not in earlier versions).

If you don’t use square brackets in such a case with Spring Boot 2.0 or later, the actual property name will be different, causing things like value injection (the @Value annotation) not to work.

Another Spring Boot issue, #14017 (Property binder does not allow special characters in map keys), explains that with Spring Boot 2.0 (or later) special characters are silently dropped when square brackets are not used.
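
For completeness, here is a minimal sketch (the FooProperties class name is made up) of consuming such a key through map binding with @ConfigurationProperties – the typical scenario where the bracket notation matters:

@Component
@ConfigurationProperties(prefix = "foo")
public class FooProperties {

  // with the bracket notation from the YAML example above, this map
  // will contain the key "http://something" mapped to "value"
  private Map<String, String> bar = new HashMap<>();

  public Map<String, String> getBar() { return bar; }

  public void setBar(Map<String, String> bar) { this.bar = bar; }
}

(Relevant imports: java.util.HashMap, java.util.Map, org.springframework.boot.context.properties.ConfigurationProperties and org.springframework.stereotype.Component.)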

It seems this feature was not documented well for some time – see Spring Boot issue #13506 (Document when and how to use bracket notation when binding to a map).

P.S.
I’m not claiming it is a good idea to use characters such as “/”, “@” or “+” in property keys with Spring Boot. I recommend avoiding such design decisions, as they may make it really hard to override properties using, for example, environment variables.


Installing PlayOnLinux under Linux Mint 19.3

Here are the commands needed to install PlayOnLinux together with its main dependencies:

wget -q "http://deb.playonlinux.com/public.gpg" -O- | sudo apt-key add -
sudo wget http://deb.playonlinux.com/playonlinux_bionic.list -O /etc/apt/sources.list.d/playonlinux.list
sudo apt-get update
sudo apt-get install playonlinux
sudo apt-get install multiarch-support
sudo apt install --install-recommends wine-installer
sudo apt-get install xterm

NOTE 1: Thanks to the first 2 lines we’re getting the latest stable version of PlayOnLinux (which was 4.3.4 at the time of writing) instead of some older version available in the repository (some 4.2.x at the time of writing).

NOTE 2: the 2nd command is specifically adjusted for Linux Mint 19.3. Please read the “Ubuntu” section on the PlayOnLinux Downloads page to find the correct command for your version of Linux Mint or Ubuntu.

After these steps I was able to start PlayOnLinux and use it to install the “Enter the Matrix” game (officially made for Windows only) from the original installation DVD that I bought many years ago – and I was able to play this game under Linux! 🙂
