RestTemplate and <mvc:message-converters> in Spring Framework 4.3

The Spring Framework’s documentation says that a RestTemplate instance created with the default constructor gets the default set of HttpMessageConverters.

But what is the default set of HttpMessageConverters? Well, this is defined in the Spring Framework Reference. Unfortunately I somehow missed it, and I believed that the default set of HttpMessageConverters is affected by the <mvc:message-converters> Spring XML configuration element, as in the sample below:

    <bean id="customGsonHttpMsgConverter" class="org.springframework.http.converter.json.GsonHttpMessageConverter">
        <property name="gson">
            <bean class="CustomGsonFactoryBean"/>
        </property>
    </bean>

    <mvc:annotation-driven>
        <mvc:message-converters>
            <ref bean="customGsonHttpMsgConverter"/>
        </mvc:message-converters>
    </mvc:annotation-driven>

    <bean id="restTemplate" class="org.springframework.web.client.RestTemplate"/>

Actually, that’s not true. In the above example the restTemplate bean will not use the “customGsonHttpMsgConverter” bean: the default set of HttpMessageConverters for RestTemplate is… hardcoded in the RestTemplate class. However, it can easily be customized. This is the correct way to configure the HttpMessageConverters used by a RestTemplate:

    <bean id="customGsonHttpMsgConverter" class="org.springframework.http.converter.json.GsonHttpMessageConverter">
        <property name="gson">
            <bean class="CustomGsonFactoryBean"/>
        </property>
    </bean>

    <bean id="restTemplate" class="org.springframework.web.client.RestTemplate">
        <property name="messageConverters">
            <list>
                <ref bean="customGsonHttpMsgConverter"/>
            </list>
        </property>
    </bean>

Lesson learned! RestTemplate has nothing to do with the <mvc:message-converters> configuration.

Posted in Java, Spring

How to add a key binding to toggle a touchpad under Linux

A laptop key toggling the touchpad

My previous laptop (an Asus) had a special key on its keyboard to toggle the touchpad. It worked out of the box with Linux Mint and the MATE desktop environment – there was special support for it. My new laptop (an HP) doesn’t have this special key and I was really missing it. This article is about how to add a key binding under Linux that behaves exactly like such a special key (tested on Ubuntu 16.04 LTS and Cinnamon 3.4.6).

Let’s identify the keycode that the desktop environment recognizes as the signal to toggle the touchpad. To discover it, run the command: xmodmap -pke|grep -i touchpad
On my computer the results look as follows:

$ xmodmap -pke|grep -i touchpad
keycode 199 = XF86TouchpadToggle NoSymbol XF86TouchpadToggle
keycode 200 = XF86TouchpadOn NoSymbol XF86TouchpadOn
keycode 201 = XF86TouchpadOff NoSymbol XF86TouchpadOff

So now we know that keycode 199 is the one recognized as toggling the touchpad.

Now we need a program that can simulate pressing a key with a given keycode. I found this can be done with xdotool, which I installed with the following command:

sudo apt install xdotool

Now you can test the program with the command:

xdotool key 199

For me it worked like a charm.
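Out of curiosity, the same key press can also be triggered from a program rather than the shell. A small illustrative Java sketch (the class and method names are mine; the actual ProcessBuilder call is left commented out so the example runs anywhere):

```java
import java.util.Arrays;
import java.util.List;

public class TouchpadToggle {
    // Builds the xdotool command that simulates pressing the given keycode
    // (199 = XF86TouchpadToggle on my machine).
    static List<String> toggleCommand(int keycode) {
        return Arrays.asList("xdotool", "key", Integer.toString(keycode));
    }

    public static void main(String[] args) {
        List<String> cmd = toggleCommand(199);
        System.out.println(String.join(" ", cmd)); // prints: xdotool key 199
        // To actually send the key press (requires xdotool and an X session):
        // new ProcessBuilder(cmd).inheritIO().start();
    }
}
```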

Finally we need to set up a key binding that runs the above command, actually toggling the touchpad. In the Cinnamon desktop environment one does this as follows (optionally refer to section 15, Custom Keyboard Shortcuts, of How To Change The Linux Mint Cinnamon Keyboard Shortcuts):

  1. Open system settings window
  2. Click on “Keyboard” item
  3. Go to “Shortcuts”
  4. Click on “Add custom shortcut” button
  5. Enter a name for a key binding (something like “touchpad toggling”)
  6. Enter a command: xdotool key 199
  7. Click on “Add” button
  8. Now select the first “unassigned” item in “Keyboard bindings” section and click it again to activate key capturing
  9. Press a key or a key combination of your choice

I used just the F5 key for this and it’s working great. I have to admit that initially I wanted the key combination Windows-key+F5, but it was not working – this is some issue between the Windows key (named the Super key under Linux) and Cinnamon.
In my opinion this case is another example of how flexible Linux and open source software are. And it’s great!

Posted in Linux

Problems with Linux on a HP Notebook

This is a record of my attempts to get a properly working Linux desktop environment on a brand new laptop: an HP Notebook 15-ba006nm (P/N: 1BV18EA).

This is what Ubuntu 16.04.3 sees – the output of the lspci -Q|grep 'VGA\|Wireless\|Display' command:

00:01.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Carrizo (rev ca)
02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8723BE PCIe Wireless Network Adapter
05:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Topaz XT [Radeon R7 M260/M265 / M340/M360 / M440/M445] (rev ff)

With this hardware I wanted to achieve following goals:

  1. Linux to boot successfully, including desktop environment
  2. Wifi network card to work properly – this hardware causes some problems, see the ant_sel fix
  3. Cooling to work as quietly as possible – in the case of this laptop it seems one should use the GPU from the APU (Radeon R5) while keeping the discrete GPU (Radeon R7) turned off
  4. Hibernation (suspend to disk) to work


Note 1: “No” for Wifi means that the Wifi adapter was working but the driver selected the wrong antenna (as explained here), so the Wifi signal reception was really poor, almost unusable.

Note 2: with 3.xx kernels one has to use the radeon or fglrx graphics driver, and with 4.xx kernels one has to use the amdgpu driver on this laptop. I don’t know the reason for this.

Test results (per distribution: does it boot, Wifi, graphics, hibernation, how loud):

  1. Linux Mint 17.3 “Rosa” – Cinnamon (64-bit), kernel 3.19
     Boots: Yes. Wifi: No. Graphics: radeon – yes, but no GPU selection; fglrx – only the discrete GPU. Hibernation: Yes. Noise: radeon – too loud; fglrx – very loud.
  2. Linux Mint 18.1 “Serena” – Cinnamon (64-bit), kernel 4.4.0
     Boots: Yes. Wifi: Yes (ant_sel fix). Graphics: Yes. Hibernation: No. Noise: a bit too loud.
  3. Linux Mint 18.2 “Sonya” – Cinnamon (64-bit), kernel 4.8.0
     Boots: No. Wifi/Graphics/Hibernation/Noise: not verified.
  4. OpenMandriva Lx 3.02, kernel 4.12
     Boots: No. Wifi/Graphics/Hibernation/Noise: not verified.
  5. openSUSE Leap 42.3, kernel 4.4
     Boots: Yes. Wifi: not verified. Graphics: Yes – but which GPU? Hibernation: No. Noise: too loud.
  6. Fedora 26 MATE-Compiz Desktop, 64-bit, kernel 4.11
     Boots: Yes, with a kernel param. Wifi: not verified. Graphics: Yes – but which GPU? Hibernation: No. Noise: too loud.
  7. Ubuntu 16.04.3 LTS (Desktop, 64-bit), kernel 4.10
     Boots: Yes, with a kernel param. Wifi: Yes (ant_sel fix). Graphics: Yes. Hibernation: No. Noise: almost ok.


Linux Mint 17.3 “Rosa” – Cinnamon (64-bit) is an LTS version. The default kernel is 3.13 and can easily be updated to 3.19. This kernel definitely doesn’t have the updated module for the Wifi adapter, so it doesn’t support the ant_sel fix. By default the radeon driver is used for the graphics card, but it was hard to guess which GPU was in use. Cooling was a little too loud, so I wanted to check whether the AMD-provided driver would give better results. After switching to fglrx (easy, via the “Drivers” window in “Settings”) this could be established using the Catalyst Control Center window. Unfortunately this driver selected the discrete GPU, which made the cooling distinctly louder. Catalyst Control Center allows switching between GPUs, but after selecting the integrated GPU and restarting (needed after this change) the X server could not start (I was too tired to investigate why). This is the only Linux distribution that was able to hibernate the system and resume. The second advantage: no problems with booting. However, the lack of Wifi was not acceptable.

Possible actions

– investigate whether a custom kernel module for the wireless adapter can be built and installed for a 3.xx kernel, so the ant_sel fix can be used
– investigate how to control which GPU is used by the radeon driver and whether one can make sure the discrete GPU is turned off


With Linux Mint 18.1 “Serena” – Cinnamon (64-bit) most things were OK. The biggest disadvantage was that hibernation was not working at all. If I recall correctly, hibernation was hanging the computer with the default kernel, and after upgrading the kernel to the newest version in the 4.4.0 line it was hanging when resuming from hibernation. Moreover, I was not sure whether the discrete GPU was completely turned off, as vgaswitcheroo reported something like “DynOff” for its status. The cooling system seemed a little too loud to me, so it pushed me to try other Linux distributions…

Possible actions

– investigate more how to fix the hibernation issue (I gave up…)
– study the details of vgaswitcheroo and whether the laptop has muxed or muxless hybrid graphics (something for people with too much free time – not me)

Ad 3 and 4

Not booting! WTF?!? I was too disappointed to dig in. I hadn’t seen an official Linux distribution fail to boot in a really long time, so I gave them up quickly.

Possible actions

– try adding amd_iommu=off to kernel boot parameters to see if it boots


openSUSE Leap 42.3 has 2 installation variants. The first one is large – 4.7 GB – and didn’t fit on my USB stick. The second is a network installer: around 100 MB, but then… it downloads something like 3.5 GB during the installation. And not all of those downloaded things are necessarily needed – annoying! Then I was hit by KDE Plasma and the YaST package manager. KDE Plasma is a monster! What happened to the KDE I was using in 2001?!? And with YaST I was unable to switch to another desktop environment (maybe I got too used to Synaptic?). If I recall correctly, I only checked whether hibernation works out of the box and gave this distribution up when it failed to hibernate.


Fedora 26 MATE-Compiz Desktop, 64-bit didn’t boot at the first approach either. After some googling I found information about kernel parameters to add. Unfortunately I don’t remember which one worked – most likely something with acpi, or something like nomodeset. Strange! After Fedora finally booted, the graphics looked really poor. And hibernation was not working, so I gave it up.


Ubuntu 16.04.3 LTS (Desktop, 64-bit) didn’t boot on the first try either. This time, based on error messages during boot (AMD-Vi: Completion-Wait loop timed out), I found that one needs to add amd_iommu=off to the kernel boot parameters. Then Ubuntu started. Of course the ant_sel fix was needed to get Wifi working correctly. The graphics driver (kernel module) used is amdgpu (as reported by the lspci -k command). Ubuntu shows the graphics subsystem details as: Gallium 0.4 on AMD CARRIZO (DRM 3.9.0 / 4.10.0-28-generic, LLVM 4.0.0). This suggests the GPU integrated with the APU is used. The cooling is not as loud as with the other distros I tested, but it’s still not silent. The worst thing is, of course, that hibernation is not working. A little bonus with Ubuntu is that the hibernation functionality is hidden from the user, so you can’t hang your system by selecting “Hibernate” from a menu; hibernation is still available to more advanced users via the command line. An additional disadvantage is the Unity desktop environment (I know some people do like it), but it should be easy to replace with something else like MATE.

Possible actions

– check if the AMDGPU-PRO Driver for Linux can be used with this Ubuntu version & this kernel. The hope is that this driver is more capable of controlling this hybrid graphics system (however, it may not be – as was the case with fglrx and its Catalyst Control Center).
– dig more into how to make hibernation work…


It looks like the best option is no. 7 (Ubuntu 16.04.3); however, option no. 2 was close, and by now I would probably know how to fix the booting problem with Linux Mint 18.2 (I’m probably Ubuntu/Mint biased…). It’s total nonsense that hibernation doesn’t work with 4.x kernels when it worked with 3.x kernels! But Wifi is a must-have.

The output of command sudo cat /sys/kernel/debug/vgaswitcheroo/switch under tested distros 2 and 7 (kernels 4.4 and 4.10) was:

1:DIS: :DynOff:0000:05:00.0

This shows that the GPU integrated with the APU is used (the discrete GPU is reported as dynamically powered off), which results in quieter cooling.

The HP laptop cooling is too loud most likely because of the laptop’s poor design, not because of Linux. However, my adventure shows one can get different results with different Linux kernels/graphics drivers, and it looks like there is a bit of a mess with these radeon/fglrx/amdgpu things. I’m not an expert on this, so any help and advice is appreciated!


Similar laptop and similar problem: HP Laptop AMD APU – Fail to Boot – Live linux cd

Posted in hardware, Linux

Problems with Wifi signal under Linux

Detailed problem description: on a brand new laptop (HP Notebook 15-ba006nm, P/N: 1BV18EA) with an RTL8723BE PCIe Wireless Network Adapter, under a quite new version of Linux (Linux Mint 18.1) with kernel 4.4.0-92-generic, I noticed very poor Wifi signal reception. At a distance of 1.5 meters from the router the Wifi signal strength was reported as 48%. The connection was breaking often, requiring manual reconnection. In another room the laptop could not receive the Wifi signal at all, while 2 older laptops did so without any problems.

I was suspecting a hardware problem, like a broken antenna on the wireless network adapter, when I found this article: Realtek RTL8723BE PCIe Wireless Network Adapter not working in Ubuntu 16.10

This article suggested that this particular Wifi hardware (RTL8723BE) has more than one antenna and that the Linux driver was not selecting the correct one. What a surprise! Then I found the ArchLinux blog post “rtl8723be wifi connection issues solved by antenna selection”, which explains how this is possible: the Wifi chip supports two antennas, but the hardware producer connected only one.


The solution was to issue the below command and reboot the computer:

sudo tee /etc/modprobe.d/rtl8723be.conf <<< "options rtl8723be ant_sel=1"

After this operation all reported problems were gone.

PS. How to check under Linux what the Wifi network adapter is?

Use the lspci command and search for a line containing the word “wireless”.

Posted in hardware, Linux

Security holes in Intel CPUs

SHORT: This is my collection of links to articles about Intel CPU/chipset security holes.

For quite some time I’ve been interested in security holes/flaws in PC hardware, as I find them a real nightmare from a software developer’s point of view. What good is perfectly written software with state-of-the-art security if the hardware allows it to be bypassed?

In recent years I’ve noticed that such a security flaw has been continuously present in Intel CPUs, in the form of the Intel ME and Intel AMT technologies. Please let me know if similar findings exist for AMD CPUs.

Someone may say that the security problems described in the articles listed below relate to Intel chipsets rather than Intel CPUs. However, nowadays you can’t (even on a desktop computer) have an Intel CPU and a non-Intel chipset on your motherboard (in the old days it was possible: SiS chipsets, NVidia chipsets, etc.). So when choosing an Intel CPU you really choose an entire Intel platform (CPU, chipset, etc.) with all these problems. Thus it all begins with an Intel CPU – hence the title of this post.

Intel Management Engine (ME) / Intel Active Management Technology (AMT)

It looks like Intel ME/AMT is a hardware backdoor present in all Intel systems (CPU+chipset) since 2008 (the introduction of the Nehalem cores), or even earlier on systems with vPro technology. It’s a separate computer, able to execute arbitrary code, able to control all buses in the “main” computer (the one the user interacts with), and it works whenever a power supply is connected (even when the “main” computer is turned off).

  1. Intel Management Engine (ME) – Libreboot FAQ
  2. A Quest To The Core. Thoughts on present and future attacks on system core technologies by Joanna Rutkowska – an overwhelming presentation of hardware holes (mainly in Intel chipsets and CPUs) and how they can be exploited. (2009)
  3. Why Rosyna Can’t Take A Movie Screenshot – a nice article describing what this technology (Intel ME/AMT) can do. There is a lot of related links under the article. (2015)
  4. Intel x86 considered harmful – a paper by Joanna Rutkowska being a survey of the various problems and attacks presented against the x86 platform over the last 10 years. (2015)
  5. Intel x86s hide another CPU that can take over your machine (you can’t audit it), (2016)
  6. Intel AMT Vulnerability Shows Intel’s Management Engine Can Be Dangerous – Intel published a security advisory about a vulnerability in Intel ME/AMT. (2017)
  7. CVE-2017-5689 – “An authentication bypass vulnerability affecting just about every Intel server with AMT, ISM or Intel Small Business technology enabled, allowing unprivileged network attackers to gain system privileges (where AMT has been provisioned). This is notable because AMT provides the possibility to remotely control a computer even when powered off. Packets sent to ports 16992 or 16993 are redirected through Intel’s Management Engine (a small, separate processor independent of the main CPU) and passed to AMT. Patch rollouts are expected to be slow, as while it is Intel’s responsibility to develop the patches (which it has done), it is not Intel’s responsibility to deliver them. That’s down to the device manufacturers and OEMs; and it is generally thought that not all will do so.” (2017)
  8. How to Hack a Turned-Off Computer, or Running Unsigned Code in Intel Management Engine – announcement of a presentation on the Black Hat Europe 2017 conference

Intel Processor Trace (PT)

  1. CyberArk: Windows 10 Vulnerable To Rootkits Via Intel’s Processor Trace Functionality, (2017)

Intel System Management Mode (SMM)

SMM was originally introduced by Intel, so we can call it an Intel technology. However, it’s present in AMD CPUs as well.

  1. Most Intel x86 Chips Have a Security Flaw, (2015)
  2. SMM problems – summary on Wikipedia


Intel SYSRET

This time Intel’s implementation of a particular x86 instruction was worse than the one found in AMD CPUs.

  1. The Intel SYSRET privilege escalation, (2012)
Posted in BIOS, hardware, security

Google App Engine, Java, JPA 2, Spring Framework, Maven

DISCLAIMER: This was going (in 2014!) to be a full tutorial, but at some point I lost motivation for digging into Google App Engine. Actually, I was so tired of the GAE data store that I felt really relieved when I returned to using a classic relational database. The draft of this article was hanging around for a couple of years waiting to be completed. Now I’ve finally realized it never will be. As far as I recall, the main problem with the Google App Engine data store was related to using transactions. Things probably behave a little better if you… are not using transactions. So be warned:

  • I publish this as a DRAFT just in case someone finds it useful.
  • The state of knowledge about GAE here is from around 2014.

This tutorial shows how to create a Maven-managed Java web application project deployed on Google App Engine, using the Spring Framework and JPA 2 as the persistence layer. It shows how to store related entities (@OneToMany) in a transaction (the key complicating factor) and how to mark a JPA entity as a child entity in the sense of Google App Engine storage. The tutorial assumes basic knowledge of Java, Maven, the Spring Framework and JPA. I used the following software versions:

  1. Google App Engine SDK, ver. 1.9.0
  2. Java SDK ver. 1.7, as required by the above version of Google App Engine
  3. JPA 2.0
  4. Spring Framework ver. 3.2.8
  5. Maven ver. 3.2.1

Create a project skeleton

Go to the command line and change to the directory in which you want to create a subdirectory for the new project. To create a project skeleton, enter the command:

mvn archetype:generate

This runs an interactive mode for generating the new project, consisting of the following steps:

  1. Enter number corresponding to item “”. It was 56 in my case.
  2. Enter number corresponding to newest available version of Google App Engine. It was 2 in my case corresponding to version 1.7.5.
  3. Enter value for “groupId” of your project
  4. Enter value for “artifactId” of your project – this will be used as a directory name that will be created for your project
  5. Enter value for “version” of your project
  6. Enter name of one of Java packages you’re going to create in your project
  7. Enter “Y” to confirm

Now the project skeleton is ready. Enter the directory with the same name as your “artifactId” and see what’s there. You can remove the “eclipse-launch-profiles” directory and the files LICENSE and nbactions.xml; you need only the pom.xml file and the src directory. You should get a structure similar to:

Initial files and directories

Configure project dependencies

Adjust version of Google App Engine

We would like to use ver. 1.9.0 of GAE instead of 1.7.5 that was provided by Maven, so open file pom.xml and change the tag <> to:


JPA 2.0

By default Google App Engine supports JPA 1.0 as one of the Java persistence technologies that let you avoid tight coupling with the Google App Engine storage (a kind of NoSQL database). To use JPA 2.0 we need to:

  1. Add the JPA 2.0 API dependency
  2. Add a Datanucleus provider of JPA 2.0 in a version corresponding to our version of the Google App Engine SDK
  3. Add Datanucleus for Google App Engine in a version corresponding to our version of the Google App Engine SDK
  4. Configure the Datanucleus plugin for Maven, which performs the entity class enhancement required by the Datanucleus JPA implementation


There are 2 important things related to the persistence.xml file:

  • the proper location of the file in the project, which should be:
    I checked that this location works, though the GAE documentation gives a different location
  • the name of the persistence-unit (“appengine-persistence-unit” in my example), which we’re going to refer to from pom.xml – the name itself is arbitrary

The skeleton of my persistence.xml file looks as follows:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
	version="2.0">

	<persistence-unit name="appengine-persistence-unit">
		<!-- You will list your entity classes here:
		<class>...</class>
		-->
		<properties>
			<property name="datanucleus.NontransactionalRead" value="true"/>
			<property name="datanucleus.NontransactionalWrite" value="true"/>
			<property name="datanucleus.ConnectionURL" value="appengine"/>
			<property name="datanucleus.jpa.addClassTransformer" value="false"/>
			<property name="datanucleus.appengine.datastoreEnableXGTransactions" value="true"/>
		</properties>
	</persistence-unit>
</persistence>

Please note following:

  • Contrary to the example from the GAE documentation, I referenced the XML schema for JPA 2.0 and denoted it in the “version” attribute of the <persistence> tag
  • I chose to list the entity classes explicitly, which later allows precise control over which classes are processed by the Datanucleus enhancer
  • I set the “datanucleus.jpa.addClassTransformer” property to “false”, which is important because the classes in our project are enhanced at build time. Without this there was a runtime error.
  • I set the “datanucleus.appengine.datastoreEnableXGTransactions” property to “true”, so we are less limited by Google App Engine storage-specific features. Without this, a single transaction can access only entities belonging to a single so-called “entity group”.

Other properties are standard for Google App Engine and JPA 2.


Let’s define the version numbers for the Datanucleus implementation of JPA 2 and Datanucleus for GAE by adding the code below inside the <properties> tag in pom.xml:


These version numbers are established by downloading the Google App Engine SDK for Java and examining the content of the lib/opt/user/datanucleus/v2 directory in the archive.

Next, add the following code inside the <dependencies> tag in pom.xml:


I used the JPA 2 API library from org.eclipse.persistence instead of the one from org.apache.geronimo.specs by arbitrary choice; the GAE SDK provides the latter.
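A sketch of what these dependency entries may look like (the version property names are placeholders of my own; check the actual version numbers against the SDK’s lib/opt/user/datanucleus/v2 directory, as described above):

```xml
<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>javax.persistence</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-api-jpa</artifactId>
    <version>${datanucleus.jpa.version}</version>
</dependency>
<dependency>
    <groupId>com.google.appengine.orm</groupId>
    <artifactId>datanucleus-appengine</artifactId>
    <version>${datanucleus.appengine.version}</version>
</dependency>
```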

And finally we set up a Maven plugin performing Datanucleus enhancement of the JPA entity classes by inserting the following code into the <plugins> tag in pom.xml (I inserted it as the second plugin):


Please note that the plugin refers to our persistence unit name, thus it will only enhance the entity classes listed in the persistence unit definition.
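For reference, a typical entry for the DataNucleus Maven plugin of that era looks roughly like this (the version number is an assumption – verify it against your GAE SDK):

```xml
<plugin>
    <groupId>org.datanucleus</groupId>
    <artifactId>maven-datanucleus-plugin</artifactId>
    <version>3.1.5</version>
    <configuration>
        <api>JPA</api>
        <persistenceUnitName>appengine-persistence-unit</persistenceUnitName>
    </configuration>
    <executions>
        <execution>
            <phase>process-classes</phase>
            <goals>
                <goal>enhance</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```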

Spring Framework

From Spring Framework we need:

  1. spring-context – needed for Spring beans configuration, especially based on annotations
  2. spring-web – needed for Spring beans context initialization in web application
  3. spring-tx – needed for Spring based database transactions
  4. spring-orm – needed for creation of beans related to JPA like entity manager factory

So let’s define a common version number for the Spring libraries by adding the code below to the <properties> tag of pom.xml:


And let’s insert the following code into the <dependencies> tag in pom.xml:

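Based on the four artifacts listed above, a sketch of such a properties/dependencies section (the spring.version property name is my choice; 3.2.8.RELEASE matches the version listed at the top of the post):

```xml
<properties>
    <spring.version>3.2.8.RELEASE</spring.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-tx</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-orm</artifactId>
        <version>${spring.version}</version>
    </dependency>
</dependencies>
```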

Write code!

Let’s say our example application stores foreign currency exchange rates. Each record consists of: 1st currency, 2nd currency, and the exchange rate from the 1st currency to the 2nd. Each day we would like to store plenty of these records (for each pair of currencies there will be 2 exchange rates: from the 1st currency to the 2nd and from the 2nd to the 1st – they aren’t equivalent).
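To see why the two directions are stored separately: real published rates include a spread, and even a pure reciprocal is not exact and needs an explicit rounding rule with BigDecimal. A small illustrative example (the numbers are made up):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FxDemo {
    public static void main(String[] args) {
        // A published EUR -> USD rate:
        BigDecimal eurUsd = new BigDecimal("1.0850");
        // The reciprocal needs an explicit scale and rounding mode,
        // because 1/1.0850 is a non-terminating decimal:
        BigDecimal usdEur = BigDecimal.ONE.divide(eurUsd, 4, RoundingMode.HALF_UP);
        System.out.println(usdEur); // 0.9217
        // A bank's published USD -> EUR rate will typically differ from
        // this reciprocal (spread), which is why both directions are stored.
    }
}
```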

Entity classes

In the Google App Engine storage system entities are related by a parent-child relation. The parent entity must be stored first. All entities having the same ancestor belong to the same entity group. Plain transactions can work only on entities from a single group; cross-group (XG) transactions can work on up to 5 entity groups. That’s why I set the “datanucleus.appengine.datastoreEnableXGTransactions” property to “true” in persistence.xml.

If we designed only a single entity, we would be limited to storing only 5 exchange rates in a single transaction. The Google App Engine specifics force us to introduce a parent entity. At the beginning it looks weird, but often you can find some natural parent entity (and sometimes not). Here is an example of such a root entity (parent entity):

@Entity
public class FxSources {
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Long id;

	@Column(nullable = false)
	private Date createdDate;

	@OneToMany(cascade = CascadeType.ALL, mappedBy = "parent")
	private List<FxSource> sources;

	public Long getId() {
		return id;
	}

	public Date getCreatedDate() {
		return createdDate;
	}

	public List<FxSource> getSources() {
		if (sources == null) {
			sources = new ArrayList<FxSource>();
		}
		return sources;
	}

	public void setCreatedDate(Date createdDate) {
		this.createdDate = createdDate;
	}
}

And below is an example of a domain entity (a child entity). We want to operate on many such entities in a single transaction, so it must be a child of some other entity.

@Entity
public class FxSource {
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	@Extension(key="gae.encoded-pk", value="true", vendorName="datanucleus")
	private String id;

	@Column(nullable = false)
	private String description;

	@ManyToOne(fetch = FetchType.LAZY, optional = false)
	private FxSources parent;

	public FxSource() {
	}

	public FxSource(String description, FxSources parent) {
		this.description = description;
		this.parent = parent;
	}

	public String getId() {
		return id;
	}

	public String getDescription() {
		return description;
	}

	public void setDescription(String description) {
		this.description = description;
	}

	@Override
	public String toString() {
		if (id == null) {
			return description;
		} else {
			return description + " [id=" + id + ']';
		}
	}
}

Remember that this solution limits you to operating on at most 5 root entities at once. So it may happen that you will need to introduce another root entity that becomes the parent of the entity that was initially going to be the root. In general, for Google App Engine and JPA, you have to design your root parent entity in such a way that there will be at most a couple of such records in your problem domain.

IMPORTANT: it looks from the GAE documentation that if one wants custom primary keys (not auto-generated ones), then the only option is to use String as the type of the @Id field. At first I used the Integer type for the field year, but the code failed on searching (EntityManager.find). Switching to String solved the problem.

Below is another entity class, acting as a child of one entity (FxDay) and as the parent of another (FxRate).

@Entity
public class FxRatesPack {
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	@Extension(key="gae.encoded-pk", value="true", vendorName="datanucleus")
	private String id;

	@Column(nullable = false)
	private String sourceId;

	/**
	 * Number representing time. Format: HHMMSS
	 */
	@Column(nullable = false, length = 8)
	private int downloadTime;

	@ManyToOne(fetch = FetchType.LAZY, optional = false)
	private FxDay day;

	@OneToMany(cascade = CascadeType.ALL, mappedBy = "pack")
	private Set<FxRate> fxRates = new TreeSet<FxRate>();

	public String getId() {
		return id;
	}

	public String getSourceId() {
		return sourceId;
	}

	public void setSourceId(String sourceId) {
		this.sourceId = sourceId;
	}

	public int getDownloadTime() {
		return downloadTime;
	}

	public void setDownloadTime(int time) {
		this.downloadTime = time;
	}

	public Set<FxRate> getFxRates() {
		return fxRates;
	}

	public void setFxRates(Set<FxRate> fxRates) {
		this.fxRates = fxRates;
	}

	public FxDay getDay() {
		return day;
	}

	public void setDay(FxDay day) {
		this.day = day;
	}
}

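The HHMMSS encoding used by downloadTime can be produced with plain Java. A sketch of what a DateUtil.timeAsInt-style helper (referenced later in the repository code, but not shown in this draft) might look like:

```java
import java.util.Calendar;
import java.util.Date;

public class DateUtil {
    /**
     * Encodes the time-of-day of the given Date as an int in HHMMSS format,
     * e.g. 09:05:30 becomes 90530.
     */
    public static int timeAsInt(Date date) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(date);
        return cal.get(Calendar.HOUR_OF_DAY) * 10000
                + cal.get(Calendar.MINUTE) * 100
                + cal.get(Calendar.SECOND);
    }
}
```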
And below is our main entity class:

@Entity
public class FxRate implements Comparable<FxRate> {
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	@Extension(key="gae.encoded-pk", value="true", vendorName="datanucleus")
	private String id;

	@Column(nullable = false)
	private CcyCode fromCcy;

	@Column(nullable = false)
	private CcyCode toCcy;

	@Column(nullable = false)
	private FxType type;

	@Column(nullable = false)
	private BigDecimal fx;

	@ManyToOne(fetch = FetchType.LAZY, optional = false)
	private FxRatesPack pack;

	public FxRate() {
	}

	public FxRate(CcyCode fromCcy, CcyCode toCcy, FxType type, BigDecimal fx) {
		if (fromCcy == null || toCcy == null || type == null || fx == null) {
			throw new IllegalArgumentException("All FX data must be non-null");
		}
		this.fromCcy = fromCcy;
		this.toCcy = toCcy;
		this.type = type;
		this.fx = fx;
	}

	public CcyCode getFromCcy() {
		return fromCcy;
	}

	public CcyCode getToCcy() {
		return toCcy;
	}

	public BigDecimal getFx() {
		return fx;
	}

	public void setFx(BigDecimal rate) {
		fx = rate;
	}

	public FxType getType() {
		return type;
	}

	public FxRatesPack getPack() {
		return pack;
	}

	public void setPack(FxRatesPack pack) {
		this.pack = pack;
	}

	@Override
	public String toString() {
		return fromCcy + " -> " + toCcy + ": " + ((type == FxType.INVERTED) ? "1/" : "") + fx;
	}

	@Override
	public boolean equals(Object o) {
		if (o instanceof FxRate) {
			FxRate x = (FxRate) o;
			return fromCcy.equals(x.fromCcy) && toCcy.equals(x.toCcy) && type == x.type;
		} else {
			return false;
		}
	}

	@Override
	public int hashCode() {
		return fromCcy.hashCode() + toCcy.hashCode() * 13 + type.hashCode() * 17;
	}

	@Override
	public int compareTo(FxRate x) {
		if (x == null) {
			return 1;
		}
		int compResult = fromCcy.compareTo(x.fromCcy);
		if (compResult != 0) {
			return compResult;
		}
		compResult = toCcy.compareTo(x.toCcy);
		if (compResult != 0) {
			return compResult;
		}
		compResult = type.compareTo(x.type);
		if (compResult != 0) {
			return compResult;
		}
		return 0;
	}
}

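Since fxRates in FxRatesPack is a TreeSet, the compareTo chain of FxRate (first currency, then second currency, then rate type) is what orders and deduplicates the rates. A simplified, self-contained illustration, with plain Strings standing in for the CcyCode and FxType enums:

```java
import java.util.TreeSet;

public class CompareChainDemo {
    static class Rate implements Comparable<Rate> {
        final String from, to, type;

        Rate(String from, String to, String type) {
            this.from = from;
            this.to = to;
            this.type = type;
        }

        // Compare field by field, like FxRate.compareTo:
        // first currency, then second currency, then rate type.
        @Override
        public int compareTo(Rate x) {
            int c = from.compareTo(x.from);
            if (c != 0) return c;
            c = to.compareTo(x.to);
            if (c != 0) return c;
            return type.compareTo(x.type);
        }
    }

    public static void main(String[] args) {
        TreeSet<Rate> rates = new TreeSet<Rate>();
        rates.add(new Rate("USD", "EUR", "DIRECT"));
        rates.add(new Rate("EUR", "USD", "DIRECT"));
        rates.add(new Rate("USD", "EUR", "DIRECT")); // duplicate key, ignored
        System.out.println(rates.size()); // 2
    }
}
```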
IMPORTANT: a child entity class for JPA in Google App Engine must have a special primary key. One of the available options is a field of type String annotated with the JPA primary key annotations plus one GAE-specific annotation: @org.datanucleus.api.jpa.annotations.Extension(key="gae.encoded-pk", value="true", vendorName="datanucleus")

Now, let’s complete the persistence.xml file by listing all entity classes (class names with packages) inside the <persistence-unit> tag. Only the listed classes will be enhanced by the Datanucleus plugin.

Spring configuration

Let’s create the applicationContext.xml file in the src/main/webapp/WEB-INF directory. The content of the file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
		http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">

	<context:component-scan base-package="kt.samples"/>
	<tx:annotation-driven transaction-manager="transactionManager"/>

	<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
		<constructor-arg ref="entityManagerFactory"/>
	</bean>

	<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
		<property name="persistenceUnitName" value="appengine-persistence-unit"/>
	</bean>

	<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
</beans>

Thanks to this configuration we will have:
– all annotated beans in the kt.samples package detected and initialized in the Spring context
– JPA support: an EntityManager can be injected into a Spring bean with the @PersistenceContext annotation
– transaction support driven by Spring annotations
– translation of database exceptions into Spring’s DataAccessException hierarchy

Next, let’s set up the creation of the Spring application context like in a normal web application. For this, edit the src/main/webapp/WEB-INF/web.xml file and insert the code below inside the <web-app> tags:
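(The snippet itself is missing from this draft; a standard sketch registers Spring’s ContextLoaderListener. With no contextConfigLocation parameter, the listener loads /WEB-INF/applicationContext.xml by default, which matches the file created above.)

```xml
<listener>
	<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```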


Repository class sample

import java.util.Date;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.TypedQuery;

import org.springframework.stereotype.Repository;

@Repository
public class FxRepository {

	@PersistenceContext
	EntityManager entityManager;

	private FxSources findOrCreateFxSourceRoot(Date timestamp) {
		TypedQuery<FxSources> query = entityManager.createQuery(
				"select s from " + FxSources.class.getName() + " s", FxSources.class);
		List<FxSources> result = query.getResultList();
		FxSources fxSources;
		if (result.isEmpty()) {
			fxSources = new FxSources();
			entityManager.persist(fxSources);
		} else {
			fxSources = result.get(0);
		}
		return fxSources;
	}

	public void store(Set<FxRate> fxRates, String sourceDesc, Date fxDate) {
		FxSource existingFxSrc = findFxSourceByDesc(sourceDesc);
		if (existingFxSrc == null) {
			FxSources parent = findOrCreateFxSourceRoot(fxDate);
			existingFxSrc = new FxSource(sourceDesc, parent);
			// GAE specific: "parent" entity must be persisted first...
			entityManager.persist(existingFxSrc);
		}
		final String dayDate = DateUtil.formatDate(fxDate);
		FxDay existingDay = findFxDay(dayDate);
		if (existingDay == null) {
			existingDay = new FxDay();
		}
		final int fxDateAsInt = DateUtil.timeAsInt(fxDate);
		// check if data were stored already
		if (!isFxRatesPackStored(existingFxSrc, fxDateAsInt)) {
			// GAE specific: ...then child entities are persisted
			FxRatesPack pack = new FxRatesPack();
			entityManager.persist(pack);
			for (FxRate fxRate : fxRates) {
				// (reconstructed from the draft: attach each rate to the pack and persist it)
				fxRate.setPack(pack);
				entityManager.persist(fxRate);
			}
		}
	}

	public FxSource findFxSourceByDesc(String desc) {
		TypedQuery<FxSource> query = entityManager.createQuery(
				"select s from " + FxSource.class.getName() + " s where s.description = :desc",
				FxSource.class);
		query.setParameter("desc", desc);
		return JpaUtil.singleResult(query, "description", desc);
	}

	public List<FxRatesPack> getAllFxRatesPacks() {
		TypedQuery<FxRatesPack> query = entityManager.createQuery(
				"select p from " + FxRatesPack.class.getName() + " p order by p.downloadTime, p.sourceId",
				FxRatesPack.class);
		return query.getResultList();
	}

	public FxDay findFxDay(String dayDate) {
		return entityManager.find(FxDay.class, dayDate);
	}

	public List<FxSource> getFxSources(Set<String> sourceIds) {
		StringBuilder queryTxt = new StringBuilder();
		queryTxt.append("select s from ");
		queryTxt.append(FxSource.class.getName());
		queryTxt.append(" s where");
		appendPlaceholders(queryTxt, sourceIds.size());

		TypedQuery<FxSource> query = entityManager.createQuery(queryTxt.toString(), FxSource.class);
		Iterator<String> iter = sourceIds.iterator();
		int i = 1;
		while (iter.hasNext()) {
			query.setParameter(i++, iter.next());
		}
		return query.getResultList();
	}

	private static void appendPlaceholders(StringBuilder queryTxt, int paramCount) {
		for (int i = 0; i < paramCount; ++i) {
			if (i > 0) {
				queryTxt.append(" or");
			}
			// field name reconstructed; the draft omitted it
			queryTxt.append(" s.id = ?").append(i + 1);
		}
	}

	private boolean isFxRatesPackStored(FxSource existingFxSrc, int fxDateAsInt) {
		TypedQuery<FxRatesPack> query = entityManager.createQuery(
				"select p from " + FxRatesPack.class.getName() + " p where p.downloadTime = :time and p.sourceId = :srcId",
				FxRatesPack.class);
		query.setParameter("time", fxDateAsInt);
		query.setParameter("srcId", existingFxSrc.getId());
		return !query.getResultList().isEmpty();
	}
}
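To see what kind of JPQL fragment the placeholder helper produces, here is a hypothetical standalone re-implementation (the field name s.id is my assumption, as the original draft omitted it):

```java
public class PlaceholderDemo {

	// Re-implementation of the repository's appendPlaceholders helper;
	// "s.id" is assumed as the field being matched.
	static void appendPlaceholders(StringBuilder queryTxt, int paramCount) {
		for (int i = 0; i < paramCount; ++i) {
			if (i > 0) {
				queryTxt.append(" or");
			}
			queryTxt.append(" s.id = ?").append(i + 1);
		}
	}

	public static void main(String[] args) {
		StringBuilder queryTxt = new StringBuilder("select s from FxSource s where");
		appendPlaceholders(queryTxt, 3);
		System.out.println(queryTxt);
		// prints: select s from FxSource s where s.id = ?1 or s.id = ?2 or s.id = ?3
	}
}
```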

(here the draft ends, so that’s all folks)

Posted in GoogleAppEngine, Java, Spring | Tagged , , | 1 Comment

RSA signatures in Java with Bouncy Castle

This is a complete guide, starting from RSA key pair generation into PEM files, through loading private/public keys from files into proper Bouncy Castle objects, to digital signature creation and verification – all using Bouncy Castle. No JCA.

How to generate RSA private and public keys ready for Java?

I know 3 programs that can generate RSA keys: keytool (shipped with the JDK), ssh-keygen and openssl.

The first tool is intended only for the Java world, so it is less universal. Moreover, it is harder with keytool to export the public and private keys into separate files. And I would rather not have to worry about RSA key size limits in the JDK. So I gave up on it.

After some research I think one should use openssl rather than ssh-keygen. The former can write the public and private keys to files in a format that a Java program can load, and it generates keys just as good as the latter, since both tools are based on the OpenSSL library.

Using openssl

The bash commands below produce a 4096-bit RSA key pair in two separate files: one with the private key and one with the public key. Both files are in PEM format, which is plain text. I’ve decided to use PEM as it’s quite convenient and universal.

openssl genrsa -out priv-key.pem 4096
openssl rsa -in priv-key.pem -pubout -outform PEM -out pub-key.pem

After executing the above commands we have an RSA public key in the pub-key.pem file and the related RSA private key in the priv-key.pem file.

How to create and check RSA signature in Java?

We will use the Bouncy Castle cryptographic library. My experience is that JCA can be disappointing (example: key strength limits). To use Bouncy Castle we need these dependencies (Maven):
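(The dependency listing itself is missing from this draft; a sketch of the two Bouncy Castle artifacts follows — the version number is only an example, pick a current release.)

```xml
<dependency>
	<groupId>org.bouncycastle</groupId>
	<artifactId>bcprov-jdk15on</artifactId>
	<version>1.60</version>
</dependency>
<dependency>
	<groupId>org.bouncycastle</groupId>
	<artifactId>bcpkix-jdk15on</artifactId>
	<version>1.60</version>
</dependency>
```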


The second dependency is for class org.bouncycastle.openssl.PEMParser which we will use to load private and public keys from generated PEM files.

Loading RSA keys from PEM files

First we need to be able to load an RSA private or public key from a disk file into a Java object of the proper Bouncy Castle class. For this task I propose the following Java code:

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

import org.apache.commons.lang3.Validate;
import org.bouncycastle.asn1.pkcs.PrivateKeyInfo;
import org.bouncycastle.asn1.x509.SubjectPublicKeyInfo;
import org.bouncycastle.crypto.params.AsymmetricKeyParameter;
import org.bouncycastle.crypto.util.PrivateKeyFactory;
import org.bouncycastle.crypto.util.PublicKeyFactory;
import org.bouncycastle.openssl.PEMKeyPair;
import org.bouncycastle.openssl.PEMParser;

public class KeyUtil {

	public static AsymmetricKeyParameter loadPublicKey(InputStream is) {
		SubjectPublicKeyInfo spki = (SubjectPublicKeyInfo) readPemObject(is);
		try {
			return PublicKeyFactory.createKey(spki);
		} catch (IOException ex) {
			throw new RuntimeException("Cannot create public key object based on input data", ex);
		}
	}

	public static AsymmetricKeyParameter loadPrivateKey(InputStream is) {
		PEMKeyPair keyPair = (PEMKeyPair) readPemObject(is);
		PrivateKeyInfo pki = keyPair.getPrivateKeyInfo();
		try {
			return PrivateKeyFactory.createKey(pki);
		} catch (IOException ex) {
			throw new RuntimeException("Cannot create private key object based on input data", ex);
		}
	}

	private static Object readPemObject(InputStream is) {
		try {
			Validate.notNull(is, "Input data stream cannot be null");
			InputStreamReader isr = new InputStreamReader(is, "UTF-8");
			PEMParser pemParser = new PEMParser(isr);
			Object obj = pemParser.readObject();
			if (obj == null) {
				throw new Exception("No PEM object found");
			}
			return obj;
		} catch (Throwable ex) {
			throw new RuntimeException("Cannot read PEM object from input data", ex);
		}
	}
}
Loading a private or a public RSA key with the KeyUtil class looks like this:

// load a public key from the file 'pub-key.pem'
InputStream pubKeyInpStream = new FileInputStream(new File("pub-key.pem"));
AsymmetricKeyParameter publKey = KeyUtil.loadPublicKey(pubKeyInpStream);

// load a private key from the file 'priv-key.pem'
InputStream prvKeyInpStream = new FileInputStream(new File("priv-key.pem"));
AsymmetricKeyParameter privKey = KeyUtil.loadPrivateKey(prvKeyInpStream);


RSA digital signature creation and verification

Basically, creating a digital signature means encrypting a message digest with the private key. Verification consists of decrypting the signature with the public key and comparing the result to a digest freshly calculated from the message.

So one new thing to do is the calculation of a message digest. Here one can choose from a couple of different digest algorithms; Bouncy Castle supports many of them (please pick a digest algorithm that is still considered secure). Let’s stick to SHA-512 for the code samples. This algorithm is provided by the org.bouncycastle.crypto.digests.SHA512Digest class.

Another thing to consider is that a “message” may be anything one wants. However, before calculating its digest it must first be converted to a byte array. For text messages one can use String.getBytes(Charset). I will just assume our message is already in byte array form.
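For illustration, a minimal sketch of turning a text message into the byte array to be signed (the message text is made up; the charset is made explicit so signer and verifier agree on the bytes):

```java
import java.nio.charset.StandardCharsets;

public class MessageToBytes {
	public static void main(String[] args) {
		String message = "transfer 100 EUR";
		// Always fix the charset; platform-default encodings differ between machines.
		byte[] messageBytes = message.getBytes(StandardCharsets.UTF_8);
		System.out.println(messageBytes.length); // prints 16
	}
}
```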

RSA signing and signature verification is handled by a single class in Bouncy Castle – org.bouncycastle.crypto.signers.RSADigestSigner. Now we have all ingredients!

Creation of an RSA digital signature:

// GIVEN: InputStream prvKeyInpStream
AsymmetricKeyParameter privKey = KeyUtil.loadPrivateKey(prvKeyInpStream);

// GIVEN: byte[] messageBytes = ...
RSADigestSigner signer = new RSADigestSigner(new SHA512Digest());
signer.init(true, privKey);
signer.update(messageBytes, 0, messageBytes.length);

byte[] signature;
try {
    signature = signer.generateSignature();
} catch (Exception ex) {
    throw new RuntimeException("Cannot generate RSA signature. " + ex.getMessage(), ex);
}

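The signature produced above is a raw byte array; for logging or sending it over a text protocol it is commonly Base64-encoded. A minimal sketch with the JDK’s java.util.Base64 (the byte values below are placeholders, not a real signature):

```java
import java.util.Base64;

public class SignatureEncodingDemo {
	public static void main(String[] args) {
		// Placeholder bytes; in practice this would be signer.generateSignature()
		byte[] signature = {0x01, 0x02, (byte) 0xFF};
		String encoded = Base64.getEncoder().encodeToString(signature);
		System.out.println(encoded); // prints "AQL/"
	}
}
```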

Verification of an RSA digital signature:

Please note that now the first parameter of RSADigestSigner.init(..) method call is false.

// GIVEN: InputStream pubKeyInpStream
AsymmetricKeyParameter publKey = KeyUtil.loadPublicKey(pubKeyInpStream);

// GIVEN: byte[] messageBytes
RSADigestSigner signer = new RSADigestSigner(new SHA512Digest());
signer.init(false, publKey);
signer.update(messageBytes, 0, messageBytes.length);

// GIVEN: byte[] signature - see code sample above
boolean isValidSignature = signer.verifySignature(signature);


Here is a real-life example of signature verification code:

Please leave your comment if you know that something should be improved here to increase signature security.

Posted in Java, security | Tagged , | Leave a comment