ZCash on Ubuntu & Debian

ZCash is the newest crypto-currency on the block. As of this writing the project is at version 1.0.1 of its client. And, the project seems Debian/Ubuntu friendly as the project maintainers have been pushing the official client in pre-built deb packages for general consumption. Very nice. Thank you z.cash project!

There seems to be a lot of help/instructions on the web. I’ve tried to summarize what I have found at various locations into this recipe below. But, https://z.cash should be considered the final authority on setup and configuration.

This write up intends to provide quick and dirty instructions on:

  1. Installing the packages
  2. Configuring ZCash
  3. Controlling daemon and mining behavior with user level crontab entries

I make the following assumptions:

  1. You have the ability to elevate to root
  2. You are OK with running the ZCash node all the time
  3. You would like to mine coins in the background in your off hours
  4. You would like to participate in a mining pool
  5. You do not want to mine coins during your work hours
  6. You will be running the ZCashd daemon as a regular user
  7. You don’t want to bother with an init.d or systemd script for the moment

At the end of the recipe I pose a few questions. Please comment or add to the discussion if you wish. I hope in advance you find this write-up useful so let’s begin!

Elevate to Root and Install:

$ sudo -s
$ sudo apt-get install apt-transport-https
$ wget -qO - https://apt.z.cash/zcash.asc | sudo apt-key add -
$ echo "deb [arch=amd64] https://apt.z.cash/ jessie main" | sudo tee /etc/apt/sources.list.d/zcash.list
$ apt-get update && sudo apt-get install zcash

Become a regular user and complete the configuration:

Return to regular user after sudo tasks above…
$ exit

Prime ZCash (keys and zkSnark stuff)…
$ zcash-fetch-params

Create configuration directory…
$ mkdir -p ~/.zcash

Add entries to the configuration file…
$ echo "addnode=mainnet.z.cash" > ~/.zcash/zcash.conf
$ echo "rpcuser=username" >> ~/.zcash/zcash.conf
$ echo "rpcpassword=$(head -c 32 /dev/urandom | base64)" >> ~/.zcash/zcash.conf
$ echo 'equihashsolver=tromp' >> ~/.zcash/zcash.conf
$ echo '#gen=1' >> ~/.zcash/zcash.conf
$ echo "#genproclimit=$(nproc)" >> ~/.zcash/zcash.conf

Note: the last two entries above are going into the zcash.conf file commented out. This is OK as we will set these parameters in the crontab entries below. You may decide you want to just run ZCash without my crontab entries so you’ll be able to just uncomment those configuration lines.
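If you'd rather build the whole configuration file in one shot, a heredoc works too. This is just a sketch that writes to a scratch directory so nothing under ~/.zcash is touched; point it at ~/.zcash/zcash.conf when you're ready:

```shell
# Scratch directory so the real ~/.zcash is untouched (illustration only)
TMPD=$(mktemp -d)
RPCPASS=$(head -c 32 /dev/urandom | base64)   # random RPC password, generated once
cat > "$TMPD/zcash.conf" <<EOF
addnode=mainnet.z.cash
rpcuser=username
rpcpassword=$RPCPASS
equihashsolver=tromp
#gen=1
#genproclimit=$(nproc)
EOF
grep -c '^' "$TMPD/zcash.conf"   # prints 6
```

Note that the heredoc expands $(nproc) and $RPCPASS just like the double-quoted echo commands above.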

Create an address (use this instead of my address in the crontab entries below when mining coins, adding the suffix rig1, rig2, etc. for each machine where you are mining)…
$ zcash-cli z_getnewaddress
$ zcash-cli z_listaddresses

Make your user-level crontab entries…
$ crontab -e

# Start the ZCash daemon on reboot using only one (1) core for mining
@reboot /usr/bin/zcashd -reindex -showmetrics=0 -gen=1 -genproclimit=1 -stratum=stratum+tcp://us1-zcash.flypool.org:3333 -user=t1PBheBAnP8iAPjCWbmPsrgyjW7mWpKv4Ct.rig1

# Kill the ZCash daemon running on one (1) core @ 5:25 PM — E.g After Work
25 17   * * *   /usr/bin/pkill zcashd
# Start the ZCash daemon using more processing power (8 cores instead of 1) — E.g After Work
30 17   * * *   /usr/bin/zcashd -reindex -showmetrics=0 -gen=1 -genproclimit=8 -stratum=stratum+tcp://us1-zcash.flypool.org:3333 -user=t1PBheBAnP8iAPjCWbmPsrgyjW7mWpKv4Ct.rig1

# Kill the ZCash daemon using many cores @ 7:25 AM — E.g. Before Work
25 07   * * *   /usr/bin/pkill zcashd
# Start the ZCash @ 7:30 AM using only one (1) core for mining — E.g. Before Work
30 07   * * *   /usr/bin/zcashd -reindex -showmetrics=0 -gen=1 -genproclimit=1 -stratum=stratum+tcp://us1-zcash.flypool.org:3333 -user=t1PBheBAnP8iAPjCWbmPsrgyjW7mWpKv4Ct.rig1

Note:  flypool.org is just the first mining pool I found that didn’t require an account so I have been using it for testing. Please leave a comment below if you know of a good one and wish to make a recommendation. In addition I have seen that there are other mining packages out there that can be compiled and installed with little trouble but for now I’m content to try out the base client for pool mining.

Note: genproclimit is just how many cores you wish to use when mining. At the command line issue the “nproc” command to learn/confirm how many cores you have in your system. Adjust the genproclimit in the crontab entries to suit your needs.
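If you want a quick way to pick a sane value, this little helper (the variable names are my own) leaves one core free for interactive use:

```shell
CORES=$(nproc)
# Leave one core free for the desktop; never go below 1
if [ "$CORES" -gt 1 ]; then
  GENPROCLIMIT=$((CORES - 1))
else
  GENPROCLIMIT=1
fi
echo "genproclimit=$GENPROCLIMIT"
```

Paste the printed value into your crontab entry or zcash.conf as you see fit.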

Note: Simply using zcashd -daemon isn’t a bad option either if you’re not worried about system responsiveness. Your crontab -e entry would just be "@reboot /usr/bin/zcashd -daemon"; just make sure you uncomment the gen and genproclimit lines in ~/.zcash/zcash.conf.

Items I’m curious about:

  1. What is the ETA on the complete/definitive manual?
  2. ETA on “official” Android/iOS app?
  3. Is it good practice to use a Tor proxy or is this overkill?
  4. Can all of the command line options found in “zcashd –help” be used in the ~/.zcash/zcash.conf file?
  5. Will there be “official” GUIs from the project to administer:
    1. mining (including pool)?,
    2. wallet(s)?
    3. transaction processing?
      1. If so, I hope it is implemented in KDE or Java Swing so that it behaves the same on Windows/Linux/Mac etc.
      2. Minimize to system tray on KDE Plasma/Gnome/Windows/Etc.
  6. Will there be an “official” z.cash mining pool hosted at z.cash?
    1. A mining contribution going back to the project might be nice.
  7. Will the daemon get some usability functionality? For instance…
    1. Mine when processor is available?
    2. Or, “nice” settings in the zcash.conf file?
    3. Or, more sophisticated date/time or sleep controls than afforded by my crontab approach above?
    4. Again, would be great if all this could be controlled and monitored via an “official” GUI
  8. Will the “official” client get GPU mining capabilities to take advantage of graphics cards?
    1. As mentioned above I have noticed some alternate mining software is becoming available but haven’t tried them as of yet.
  9. There are a number of “Crypto-Currency Exchanges” so will one or more of these be designated as “definitive” by the project?
    1. I’ve noticed that http://kraken.com has already added zcash trading
      1. Not a plug. Just an observation that it was fast considering the project has only been “live” for a short time.
  10. What will the peer review look like on the new currency?
  11. How much peer review does the project require to confirm that the currency is secure and anonymous?

Finally, thank you to the members of the Z.Cash project for the new Crypto-Currency! Please check them out directly at: https://z.cash

PostgreSQL 9.5 Streaming Cluster on Ubuntu 16.04

It can be very useful to have PostgreSQL configured in a cluster. PostgreSQL is able to do a few different kinds of clustering right “out of the box,” but each of the built-in options follows the MASTER-SLAVE scenario: the first server (the MASTER) handles all the writes and reads, and a second server (the SLAVE) gets a COPY of everything that is written to the MASTER.

PostgreSQL offers a few different ways for the Db to achieve this: log shipping, binary replication and streaming replication.

But, why is this useful? There are a few reasons. First, if your MASTER server goes down you can promote the SLAVE to be the new MASTER and get your database dependent application back on line quickly with little chance of data loss. Second, if you have some very long or intensive queries these can be run against the SLAVE leaving the MASTER to be more responsive (reduced load) for other write activity in your application. Third, boring but necessary I/O intensive administrative tasks like backups can be run against the SLAVE again taking load off the MASTER.

There do appear to be some downsides to the “out of the box” solutions in PostgreSQL. First, there does not appear to be an automated fail-over mechanism meaning that you will have to take a few steps to ensure that your MASTER is truly down and then promote your SLAVE to be the new MASTER. (I’m not 100% on that point.) And, in the MASTER-SLAVE cluster your application cannot WRITE to the SLAVE!

This kind of cluster, where your application can read and write to any database server in the cluster and the servers all stay in sync, is called a MASTER-MASTER cluster and doesn’t appear to be an “out of the box” option in PostgreSQL 9.5. That being said, there appear to be many options out there to address this need: if not true MASTER-MASTER, at least intelligent load balancing and/or fail-over. Perhaps even too many options!?!?

The application I am working on now could make good use of a true MASTER-MASTER scenario; the servers working together without being fronted by some balancer solution. But, as I said above there are many approaches to fail-over and MASTER-MASTER cluster configurations so when I have a chance to investigate the current options out there and choose one for my production application I’ll write up what I found. Also, if you follow the PostgreSQL project much it seems this project moves at light speed. I wouldn’t be surprised if MASTER-MASTER is built in sometime soon. I don’t know that but again these guys/gals seem to make useful and solid improvements on this project VERY quickly.

Enough discussion, let’s get to work!
The run through below is an example of a scenario with one MASTER and 2 SLAVES.

I have 3 servers:
MASTER — Gold (192.168.100.100) -> WRITE new data
SLAVE1 — Silver (192.168.100.101) -> Large and/or intensive READ queries
SLAVE2 — Bronze (192.168.100.102) -> Backup routines and, well, I’ve got room for a third so why not.

A few assumptions:
You are running Ubuntu Server 16.04 (Latest stable as of this blog post)
You are familiar with the command line and editing files. I use vi below but use any editor you like.
You have a little patience…

On the MASTER:
Let’s get PostgreSQL installed…

sudo -s (Enter your password when prompted)
apt-get install aptitude
aptitude update
aptitude install --with-recommends postgresql postgis

su postgres
psql
CREATE ROLE replication WITH REPLICATION PASSWORD 'YourSecretPassword' LOGIN;
\q
exit

cd /etc/postgresql/9.5/main

vi pg_hba.conf and append the lines:
host    all     all     192.168.0.0     255.255.0.0     md5
host    replication replication 192.168.100.100/16 md5 #Gold
host    replication replication 192.168.100.101/16 md5 #Silver
host    replication replication 192.168.100.102/16 md5 #Bronze

Note: In the above entries I use /16 and 255.255.0.0 for the netmask part of my entries. Of course, adjust the IP scheme and netmask to match your network and server entries. Also, I put all my cluster members in the pg_hba.conf file on each cluster member just so I don’t have to worry about it as I play with promoting and demoting members to/from MASTER/SLAVE status.
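If you have several cluster members, a small loop saves some typing. This is my own helper sketch, not an official tool; it writes the replication lines to a scratch file so you can inspect them before appending to the real pg_hba.conf (adjust IPs and the /16 netmask to your network):

```shell
# Build the replication entries from a host list (scratch file)
PGHBA=$(mktemp)
for entry in "192.168.100.100 Gold" "192.168.100.101 Silver" "192.168.100.102 Bronze"; do
  set -- $entry   # split "IP Name" into $1 and $2
  printf 'host    replication replication %s/16 md5 #%s\n' "$1" "$2" >> "$PGHBA"
done
cat "$PGHBA"
```

Review the output, then append it to /etc/postgresql/9.5/main/pg_hba.conf on each member.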

vi postgresql.conf find the following entries, uncomment and edit:
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 10 # Probably overkill as I only have a few slave servers but may add more
wal_keep_segments = 2048 # Probably _WAY_ overkill, as I wouldn't expect my SLAVE to be down that long. But, I've got plenty of disk space so…

service postgresql restart; tail -f /var/log/postgresql/postgresql-9.5-main.log

Watch the logs for a bit to make sure postgresql came back up OK.
Control-C to stop tailing the log

Loosen security for just a little while…

We need to make a couple of changes in security that we will UNDO later in these instructions. These temporary changes in security are needed as we will need to ‘prime’ the SLAVE in a future step to make sure it is ready to go.

Reconfigure SSH…

vi /etc/ssh/sshd_config
change the line:
PermitRootLogin prohibit-password
to:
PermitRootLogin yes

service ssh restart

Enable root login and password…
passwd root
passwd -u root

Again, don’t worry after we ‘prime’ the SLAVE we are going to lock the root account back down.

Ok, so the MASTER is now good to go.  Let’s work on our SLAVE.

On the SLAVE (Both of them if you decide you need two as well):
Let’s get PostgreSQL installed…

sudo -s (Enter your password when prompted)
apt-get install aptitude
aptitude update
aptitude install --with-recommends postgresql postgis

cd /etc/postgresql/9.5/main

vi pg_hba.conf and append the lines:
host    all     all     192.168.0.0     255.255.0.0     md5
host    replication replication 192.168.100.100/16 md5 #Gold
host    replication replication 192.168.100.101/16 md5 #Silver
host    replication replication 192.168.100.102/16 md5 #Bronze

Note: In the above entries I use /16 and 255.255.0.0 for the netmask part of my entries. Of course, adjust the IP scheme and netmask to match your network and server entries. Also, I put all my cluster members in the pg_hba.conf file on each cluster member just so I don’t have to worry about it as I play with promoting and demoting members to/from MASTER/SLAVE status.

cd /etc/postgresql/9.5/main

vi postgresql.conf find the following entries, uncomment and edit:
listen_addresses = '*'
hot_standby = on

cd /var/lib/postgresql/9.5/main

vi recovery.conf and add:

standby_mode          = 'on'
primary_conninfo      = 'host=gold.yourdomain.com port=5432 user=replication password=YourSecretPassword'
trigger_file = '/etc/postgresql/triggerImNowTheMaster' # This little guy is important as its presence lets the server know to kick over from a SLAVE to a MASTER

chown postgres.postgres recovery.conf
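The same file can be written in one go with a heredoc. A sketch against a scratch directory (the host, user and password are placeholders; the real file belongs in /var/lib/postgresql/9.5/main):

```shell
# Write recovery.conf in one shot; quoted EOF so nothing inside is expanded
TMPD=$(mktemp -d)
cat > "$TMPD/recovery.conf" <<'EOF'
standby_mode          = 'on'
primary_conninfo      = 'host=gold.yourdomain.com port=5432 user=replication password=YourSecretPassword'
trigger_file          = '/etc/postgresql/triggerImNowTheMaster'
EOF
cat "$TMPD/recovery.conf"
```

Remember the chown to postgres.postgres once the file is in place for real.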

This next command is WHY we weakened security above and allowed for root login. Basically, we want to make an exact copy of everything that is on the MASTER (all the files and folders in the PGDATA directory) on our SLAVE. This way when we start the SLAVE it is in sync with the MASTER.

So, issue an rsync command to copy over the files from the MASTER while preserving permissions, etc…

rsync -av -e ssh root@192.168.100.100:/var/lib/postgresql /var/lib

service postgresql restart; tail -f /var/log/postgresql/postgresql-9.5-main.log

Watch the logs for a bit to make sure postgresql came back up OK.
Control-C to stop tailing the log (or continue to watch if you wish)

The log entries should look pretty similar to the MASTER but you will begin  to see entries similar to:
LOG:  started streaming WAL from primary

On the MASTER:

tail -f /var/log/postgresql/postgresql-9.5-main.log

Watch the logs for a bit to make sure postgresql came back up OK.
Control-C to stop tailing the log (or continue to watch if you wish)

But, we need to remember to UNDO the security changes we made on the MASTER

vi /etc/ssh/sshd_config
change the line:
PermitRootLogin yes
to:
PermitRootLogin prohibit-password

service ssh restart

passwd -dl root (this re-locks the root account)
— passwd: password expiry information changed.

Some final items…

In the scenario above the logs are transmitted directly from the MASTER to the SLAVE(S), so unlike Log Shipping no common shared directory is required. However, in the walk-through above the transmission of the transactions from the MASTER to the SLAVE is asynchronous, so if the MASTER does fail a few transactions might be missing from the SLAVE. If the MASTER/SLAVE match needs to be closer to perfect for your scenario, there is an option in the postgresql.conf file to make this communication synchronous. But there will be a performance impact: a record written to the MASTER will not be reported as committed to the application until it is also written on the SLAVE.
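For reference, here is a sketch of what the synchronous variant might look like; "silver" is just my label for the chosen standby, advertised via application_name:

```conf
# On the MASTER, in postgresql.conf:
synchronous_standby_names = 'silver'  # application_name of the synchronous standby
synchronous_commit = on               # commits wait for the standby to confirm

# On the SLAVE, in recovery.conf, name yourself inside primary_conninfo:
# primary_conninfo = 'host=gold.yourdomain.com port=5432 user=replication password=YourSecretPassword application_name=silver'
```

Reload both servers after the change and verify the standby shows as sync in pg_stat_replication.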

There are also some variations on this MASTER-SLAVE approach. You can have all the SLAVES look to the MASTER, designating a primary SLAVE to be synchronous while the rest are asynchronous. Or you can have one PRIMARY SLAVE look to the MASTER synchronously and have many other SLAVES look to this PRIMARY SLAVE to get their updates in an asynchronous manner.

For the sake of speed above I lowered root security. DON’T DO THIS IN AN UNSECURED ENVIRONMENT, AND CONSIDER ANY NETWORK SEGMENT YOU DON’T HAVE FULL PHYSICAL CONTROL OVER AS INSECURE. I did this out of haste and have high confidence that my isolated demo/lab network is pretty secure. Priming the SLAVE is important, but if I were communicating with the servers in a wild or unprotected environment I’d probably tar/gzip the PGDATA folder (/var/lib/postgresql), being sure to preserve permissions, move this folder to the SLAVE and extract it there. There is also a command (pg_basebackup) that looks like it will make this the preferred approach.

Now that you have your MASTER and a couple of SLAVES setup. TEST, TEST, TEST!!!!

Create a new Db on the MASTER. See it show up on your SLAVES.
Create a table in the Db on the MASTER. See it show up on your SLAVES.
Make some inserts, updates and deletes in your table. See the changes in your SLAVES.
su to postgres, run psql and enter \du to see the ROLES (aka users) you create propagate from the MASTER to the SLAVES.

Take one of your slaves down for a few minutes and bring it back up. Do a "ps ax | grep receiver" and you should see something similar to "postgres: wal receiver process  streaming 0/1A0519918" as it tries to catch up with the MASTER. Run the "ps ax | grep receiver" command a few times and you will see the name of the WAL file change during the catch-up.

Touch /etc/postgresql/triggerImNowTheMaster on the SLAVE, chown postgres.postgres /etc/postgresql/triggerImNowTheMaster, shut down the MASTER and you should be able to WRITE to the SLAVE. By the way, when you promote a SLAVE you will have to update any other SLAVES you may have to point to the new MASTER in their /var/lib/postgresql/9.5/main/recovery.conf files.
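Those promotion steps can be scripted. The sketch below uses a scratch path so it can run anywhere; on a real SLAVE the trigger file is /etc/postgresql/triggerImNowTheMaster and must be owned by postgres:

```shell
# Promotion sketch: create the trigger file the standby is watching for.
# Scratch path here; substitute /etc/postgresql/triggerImNowTheMaster for real.
TRIGGER="$(mktemp -d)/triggerImNowTheMaster"
touch "$TRIGGER"
# On a real SLAVE you would then do:
#   chown postgres.postgres "$TRIGGER"
#   tail -f /var/log/postgresql/postgresql-9.5-main.log   # watch it leave recovery
test -f "$TRIGGER" && echo "trigger file in place"
```

Once the standby notices the file it exits recovery and starts accepting writes.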

Think of other things that might go wrong and try them. Try new scenarios and try to break it in your test environment; it’s the best way to learn! Production is not the place to learn new things about your software and its configuration. Probably every Developer/Admin has learned that one the hard way at some point!   😉

Finally, I hope you found this walk-through useful. Please leave a comment if you did and I always welcome upgrades. Good luck with your new cluster.

WildFly 10 — java.lang.OutOfMemoryError: Metadata space

Occasionally, during development, WildFly seems to chew up its Metadata space. For me this seems to happen when I am deploying and undeploying an application often on my workstation during development. I find it useful to edit the standalone.conf file in the bin directory of my local wildfly install and give the application server a little more breathing room with regard to memory. I make the following edit around line 50 of this file:

Change:
JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"

To:
JAVA_OPTS="-Xms64m -Xmx2G -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=2G -Djava.net.preferIPv4Stack=true"

I’m using JDK 8, where the HotSpot JVM stores the representation of class metadata in native memory, in an area called Metaspace.

And, yes, 2G above is probably overkill. Tune to your needs and available RAM on your workstation.

WildFly 10 and WFLYTX0013

You may get the warning “WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.” in your logs when you startup WildFly even if you are running in standalone mode. This really isn’t a problem but if you want to prevent this from showing up in your logs edit your standalone-full.xml (or the standalone config file you are using) by adding an identifier to the appropriate tag.

Find the tag <core-environment> and change it to:

<core-environment node-identifier="localhost">

Or, whatever you like. (Maybe the actual host name?)

But, keep in mind that if your environment/application grows to where you need multiple nodes, THIS MUST BE A UNIQUE VALUE on each of your WildFly nodes, so you will have to update/change this value as you add nodes.

Add GMail Mail-Session to Wildfly 10

 

If you find your application has the need to exchange mail with Google’s mail servers you’ll need to tweak your WildFly configuration file a bit in order to expose GMail via JNDI to your application.

Find <subsystem xmlns="urn:jboss:domain:mail:2.0"> and add the snippet:

<mail-session name="java:jboss/mail/Gmail" from="username@gmail.com" jndi-name="java:jboss/mail/Gmail">
<smtp-server password="YourSecretPassword" username="username@gmail.com" ssl="true" outbound-socket-binding-ref="mail-smtp-gmail"/>
</mail-session>

Find <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> and add the snippet:

<outbound-socket-binding name="mail-smtp-gmail" source-port="0" fixed-source-port="false">
<remote-destination host="smtp.gmail.com" port="465"/>
</outbound-socket-binding>

Restart WildFly and you should now be able to use GMail via a JNDI/Resource lookup of java:jboss/mail/Gmail.

 

JDBC Security Domain Realm to WildFly 10

In my current project I intend for my Java EE Application to authenticate users against a “security-domain” in the WildFly 10.0.0.Final Application Server. In my scenario both PostgreSQL and WildFly are installed and running on an Ubuntu server on my LAN. This server has no GUI. I have posted my install notes for both PostgreSQL 9.5 and WildFly 10 on an Ubuntu 15.10 server here:

https://gesker.wordpress.com/2016/02/09/postgresql-and-postgis-on-ubuntu-15-10/
https://gesker.wordpress.com/2016/02/09/wildfly-10-on-ubuntu-15-10/

And, I already have “datasources” setup in WildFly that make use of a database (wildflyDb) in my PostgreSQL server. In Postgres the user “wildfly” owns the database “wildflyDb.” The setup notes for these datasources are at:

https://gesker.wordpress.com/2016/02/09/adding-postgresql-9-5-datasources-to-wildfly-10/

These install notes presume you have followed my directions in the URIs above.
From your workstation…

Launch pgadmin3 and connect to your server that is running PostgreSQL, has the db user “wildfly” and the database “wildflyDb”
After connecting, execute some SQL to create some tables and join those tables into a view so that they can be used by the security domain we are going to set up in WildFly. There are many ways to do this, but I tried to keep this example pretty straightforward.
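One heads-up before running the SQL below: the CREATE TRIGGER statements call a function named update_modified_timestamp() that these notes never define. A minimal definition (my own sketch; run it first) would be:

```sql
-- Keep the "modified" column current on every UPDATE
CREATE OR REPLACE FUNCTION update_modified_timestamp() RETURNS trigger AS $$
BEGIN
  NEW.modified = current_timestamp;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

With that in place the trigger statements below will succeed.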
-- Table of Application Users
BEGIN;
CREATE TABLE IF NOT EXISTS cfg_user (
id SERIAL PRIMARY KEY,
name VARCHAR(254) NOT NULL,
email VARCHAR(254) UNIQUE NOT NULL, -- email address is userName -- RFC3696 and RFC5321 indicate 254 characters plus 2 angle brackets
pswd VARCHAR(128) NOT NULL, -- SHA-512, Hex encoded in the application = 128 characters
description VARCHAR(254),
enabled BOOLEAN DEFAULT '1' CHECK (enabled = '1' OR enabled = '0'),
created TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp,
modified TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp
);

CREATE TRIGGER cfg_user_modified BEFORE UPDATE ON cfg_user FOR EACH ROW EXECUTE PROCEDURE update_modified_timestamp();
COMMIT;

BEGIN;
INSERT INTO cfg_user (name, email, pswd) VALUES ('First User', 'first@fakecompany.com', '346151f63c549a27c405e302b638b63039172c524a488e5fe01eacbbc7555a6dc7a36bb3bdd58c6f904a863555d12de4dde92244377f1876c56da867a9d64fxb'); -- SHA-512 - hex
INSERT INTO cfg_user (name, email, pswd) VALUES ('Second User', 'second@fakecompany.com', '61e0ea41fd076f90864293f595f59776f9ef41ed3cc69b652c04e15dab5efdf8e9ea876219e7dcb35610baff37372162e42111f45e1f24fc1cfb840fc6cbb3xd'); -- SHA-512 - hex
COMMIT;

-- Table of Application Roles
BEGIN;
CREATE TABLE IF NOT EXISTS cfg_role (
id SERIAL PRIMARY KEY,
name VARCHAR(10) UNIQUE NOT NULL,
description VARCHAR(30) UNIQUE NOT NULL,
enabled BOOLEAN DEFAULT '1' CHECK (enabled = '1' OR enabled = '0'),
created TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp,
modified TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp
);

CREATE TRIGGER cfg_role_modified BEFORE UPDATE ON cfg_role FOR EACH ROW EXECUTE PROCEDURE update_modified_timestamp();
COMMIT;

BEGIN;
INSERT INTO cfg_role (name, description) VALUES ('root', 'root aka system administrator');
INSERT INTO cfg_role (name, description) VALUES ('financial', 'financial department');
INSERT INTO cfg_role (name, description) VALUES ('manager', 'manager');
INSERT INTO cfg_role (name, description) VALUES ('supervisor', 'supervisor');
INSERT INTO cfg_role (name, description) VALUES ('admin', 'administrative assistant');
INSERT INTO cfg_role (name, description) VALUES ('inspector', 'inspector');
INSERT INTO cfg_role (name, description) VALUES ('customer', 'customer');
COMMIT;
-- Map Application Users to Roles -- I have screens in my application that control the contents of this table that only the Administrator can reach
BEGIN;
CREATE TABLE IF NOT EXISTS cfg_user_role_map (
id SERIAL PRIMARY KEY,
cfg_user_id INTEGER REFERENCES cfg_user (id) ON DELETE CASCADE ON UPDATE CASCADE,
cfg_role_id INTEGER REFERENCES cfg_role (id) ON DELETE CASCADE ON UPDATE CASCADE,
UNIQUE (cfg_user_id, cfg_role_id),
enabled BOOLEAN DEFAULT '1' CHECK (enabled = '1' OR enabled = '0'),
created TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp,
modified TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp
);

CREATE TRIGGER cfg_user_role_map_modified BEFORE UPDATE ON cfg_user_role_map FOR EACH ROW EXECUTE PROCEDURE update_modified_timestamp();
COMMIT;

BEGIN;
-- Put FirstUser in ALL Roles -- Unlikely but this is just an example
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (1, 1);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (1, 2);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (1, 3);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (1, 4);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (1, 5);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (1, 6);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (1, 7);
-- Put SecondUser in ALL Roles -- Unlikely but this is just an example
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (2, 1);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (2, 2);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (2, 3);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (2, 4);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (2, 5);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (2, 6);
INSERT INTO cfg_user_role_map (cfg_user_id, cfg_role_id) VALUES (2, 7);
COMMIT;
-- Map Application Users to Roles AS a View -> for use in/by the WildFly Login-Module
BEGIN;
CREATE VIEW cfg_user_role_map_view AS
SELECT
cfg_user_role_map.id,
cfg_user.email,
cfg_role.name
FROM cfg_user_role_map
LEFT JOIN cfg_user ON cfg_user.id = cfg_user_role_map.cfg_user_id
LEFT JOIN cfg_role ON cfg_role.id = cfg_user_role_map.cfg_role_id;
COMMIT;

Ok, so our database is all set up. Add sample data as you see fit. Change table names and columns as you see fit. Just make sure any adaptations you make match up to the code snippets we are going to add to the WildFly server below.

cd /srv/wildfly/standalone/configuration
create the file:
jdbcWildflyDbRealm.properties

Add the contents:
root=root
financial=financial
manager=manager
supervisor=supervisor
admin=admin
inspector=inspector
customer=customer

cp standalone-full.xml standalone-full.xml.backup

Copy the xml snippet below into the <security-domains> section of standalone-full.xml

<security-domain name="jdbcWildflyDbRealm" cache-type="default">
<authentication>
<login-module code="Database" flag="required">
<module-option name="dsJndiName" value="java:jboss/datasources/PgDsWildflyDb"/>
<module-option name="principalsQuery" value="SELECT pswd FROM cfg_user WHERE email=?"/>
<module-option name="rolesQuery" value="SELECT name, 'Roles' FROM cfg_user_role_map_view WHERE email=?"/>
<module-option name="hashAlgorithm" value="SHA-512"/>
<module-option name="hashEncoding" value="hex"/>
<module-option name="unauthenticatedIdentity" value="anonymous"/>
</login-module>
<login-module code="RoleMapping" flag="required">
<module-option name="rolesProperties" value="file:${jboss.server.config.dir}/jdbcWildflyDbRealm.properties"/>
<module-option name="replaceRole" value="false"/>
</login-module>
</authentication>
</security-domain>
Make sure all the files associated with the WildFly server (especially the new properties file we just created) belong to the wildfly user:
chown -R wildfly.wildfly /srv/wildfly*

Finally, the construction of your web application is outside the scope of this blog post, but in order to make use of this new security-domain you will need to add the file jboss-web.xml to the WEB-INF folder of your Java EE application with the contents:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
<context-root>WhateverYouChoseForYourContextRoot</context-root>
<security-domain>jdbcWildflyDbRealm</security-domain>
</jboss-web>

Adding PostgreSQL 9.5 datasources to WildFly 10

In my current project I intend to use PostgreSQL 9.5 as a datasource for my Java EE Application that will be running in the WildFly 10.0.0.Final Application Server. In my scenario both PostgreSQL and WildFly are installed and running on an Ubuntu server on my LAN. This server has no GUI. I have posted my install notes for both PostgreSQL 9.5 and WildFly 10 on an Ubuntu 15.10 server here:

https://gesker.wordpress.com/2016/02/09/postgresql-and-postgis-on-ubuntu-15-10/
https://gesker.wordpress.com/2016/02/09/wildfly-10-on-ubuntu-15-10/

These install notes presume you have followed my directions in the URIs above.

In these install notes I’d like to connect my WildFly server to my PostgreSQL database so that my application can make use of the database via the datasources exposed in the WildFly application server. Or, put another way, I’d like to “wire up” my WildFly server to use my PostgreSQL database.

My WildFly is installed at /srv/wildfly. We’ll create the base database “module” there. In this case a module is just an xml file and associated jar file in a proper location in the wildfly directory tree.

cd /srv/wildfly/modules

mkdir -p org/postgres/main

cd org/postgres/main

Get the JDBC driver from http://jdbc.postgresql.org:

wget --tries=0 --continue https://jdbc.postgresql.org/download/postgresql-9.4.1207.jar

Create the module.xml file:

vi module.xml

Add these contents to the module.xml file:

<?xml version="1.0" ?>

<module xmlns="urn:jboss:module:1.1" name="org.postgres">

<resources>
<resource-root path="postgresql-9.4.1207.jar"/>
</resources>

<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
<module name="javax.servlet.api" optional="true"/>
</dependencies>
</module>

In the above module setup, the name of the jdbc jar file in the resource-root element of the module.xml file (postgresql-9.4.1207.jar in this case) must exactly match the name of the jar file you downloaded from http://jdbc.postgresql.org; keep this in mind if you have to upgrade the jdbc driver later.
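A quick way to double-check that the jar named in module.xml actually exists on disk. The sketch below builds a scratch copy so it can run anywhere; run the same grep/sed against your real /srv/wildfly/modules/org/postgres/main directory:

```shell
# Scratch copy of the module directory (illustration only)
TMPD=$(mktemp -d); cd "$TMPD"
touch postgresql-9.4.1207.jar
printf '<resource-root path="postgresql-9.4.1207.jar"/>\n' > module.xml
# Pull the jar name out of module.xml and confirm the file is present
JAR=$(sed -n 's/.*resource-root path="\([^"]*\)".*/\1/p' module.xml)
test -f "$JAR" && echo "module.xml matches $JAR"
```

If the test fails after a driver upgrade, the path attribute in module.xml is stale.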

NOTE (Alternate way to create this module): Creating the org.postgres module manually as above may not be the preferred way to install a module/driver going forward. There is an alternate way using the jboss-cli.sh program found in the bin directory of your WildFly installation:

# cd /tmp
# wget --tries=0 --continue https://jdbc.postgresql.org/download/postgresql-9.4.1207.jar
# cd /srv/wildfly/bin
# ./jboss-cli.sh
[disconnect] connect
[yourhost:9990] module add --name=org.postgres --resources=/tmp/postgresql-9.4.1207.jar --dependencies=javax.api,javax.transaction.api
[yourhost:9990] /subsystem=datasources/jdbc-driver=postgres:add(driver-name="postgres",driver-module-name="org.postgres",driver-class-name=org.postgresql.Driver)
[yourhost:9990] quit

 

Make sure all the files associated with the WildFly server (especially the new module we just created) belong to the wildfly user:

chown -R wildfly.wildfly /srv/wildfly*

Now we need to make some edits to our standalone-full.xml file:

cd /srv/wildfly/standalone/configuration

cp standalone-full.xml standalone-full.xml.backup

Copy this snippet into the <datasources><drivers> section of standalone-full.xml:

<driver name="postgres" module="org.postgres">
<datasource-class>org.postgresql.Driver</datasource-class>
<xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
</driver>

Copy this snippet into the <datasources> section of standalone-full.xml:

<datasource jndi-name="java:jboss/datasources/PgDsWildflyDb" pool-name="PgDsWildflyDb" enabled="true" use-java-context="true">
<connection-url>jdbc:postgresql://localhost:5432/wildflyDb?ApplicationName=YourApplicationName</connection-url>
<driver>postgres</driver>
<pool>
<min-pool-size>5</min-pool-size>
<initial-pool-size>5</initial-pool-size>
<max-pool-size>100</max-pool-size>
<prefill>true</prefill>
</pool>
<validation>
<valid-connection-checker
class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"></valid-connection-checker>
<exception-sorter
class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"></exception-sorter>
</validation>
<security>
<user-name>wildfly</user-name>
<password>wildfly</password>
</security>
</datasource>

<xa-datasource jndi-name="java:jboss/datasources/PgDsXaWildFlyDb" pool-name="PgDsXaWildflyDb" enabled="true"
use-java-context="true">
<driver>postgres</driver>

<xa-datasource-property name="ServerName">localhost</xa-datasource-property>
<xa-datasource-property name="PortNumber">5432</xa-datasource-property>
<xa-datasource-property name="DatabaseName">wildflyDb</xa-datasource-property>

<xa-pool>
<min-pool-size>5</min-pool-size>
<initial-pool-size>5</initial-pool-size>
<max-pool-size>100</max-pool-size>
<prefill>true</prefill>
</xa-pool>
<validation>
<valid-connection-checker
class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker">
</valid-connection-checker>
<exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter">
</exception-sorter>
</validation>
<security>
<user-name>wildfly</user-name>
<password>wildfly</password>
</security>
</xa-datasource>

Save your changes and restart the server, and now your web application can make use of these datasources. My guess is that your application will most likely use the regular datasource, but I’ve also added the xa-datasource as I have applications where this is handy.