Quickly Reestablish Trust on Older Versions of Windows

Every once in a while an older workstation or server will lose its trust relationship with the Windows Domain. I experienced this "lost trust" issue recently when I had to roll back a virtual server instance of Windows Server 2008 R2 to a snapshot that was quite dated.

The error message was similar to: "The trust relationship between this workstation and the primary domain failed."

A quick Google search turns up the popular solution of re-joining the domain, which requires rebooting the server/workstation. This does work.

But if you are in a rush (and you are able to log into the server/workstation with a local account), give the following command a try at the command prompt:

netdom.exe resetpwd /s:DomainServer /ud:YourDomain\Administrator /pd:*

 

Of course, substitute "DomainServer" with one of your actual domain controllers and substitute "YourDomain" with the name of your domain.
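
For a concrete (and entirely hypothetical) example, if your domain controller were dc01.corp.example.com and your NetBIOS domain name were CORP, the command would look like the following, and nltest can confirm the repaired secure channel afterwards:

netdom.exe resetpwd /s:dc01.corp.example.com /ud:CORP\Administrator /pd:*
nltest /sc_verify:CORP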

"netdom.exe" is probably already installed if you need to try this little trick on an older version of Windows Server. On older workstation versions of Windows, for instance Windows 7, you may have to install the Remote Server Administration Tools (RSAT) from Microsoft in order to get this utility.

 

Tor Proxy & Bridge on Ubuntu 16.10

The tor project is a collection of software designed as an Internet layer/protocol that enables users to send and receive network traffic of all kinds while making it difficult for a third party observer to conduct surveillance of the traffic. The project attempts to accomplish this by bouncing the user's network traffic around a worldwide network of relays. All of this bouncing makes the traffic difficult to observe, and makes it difficult for the computer systems you reach by way of the tor network to learn your true physical location or IP address.

The network is operated by numerous volunteers who are willing to share some of their bandwidth and a little CPU power with the rest of the network. Most participants in the network feel that this technology is an extremely important component in the mix of available technologies that can help users maintain their privacy.

As with all technology, the tor network can be used for the greater good or for less noble purposes. Examples of this usage dichotomy might include helping those who live in dangerous parts of the world or under the rule of oppressive regimes communicate with the rest of the world, or enabling end users to obtain the intellectual property of others illegally. But, I suspect most users just want to maintain their privacy.

Below is a recipe you can use to install the tor software on your computer, thereby creating a node or extra hop that assists in the bouncing of data around the tor network and helps others maintain their privacy. The software is available for several platforms, but we are going to focus on installing it on an Ubuntu 16.10 server.

Our configuration will allow our server to act as a proxy on the LAN, so that other computers on your local area network can make use of the tor proxy we are setting up and hence have their traffic relayed and obfuscated over the tor network. This recipe will also allow our server to act as a "bridge," a specialized tor relay node that is unpublished and known (for discussion purposes) by only a few other nodes on the network. Each bridge node increases the opacity of the network. So, even if your node isn't an "exit" node, it is a meaningful contribution to the operation of the network.

Installing the basic software…

We are going to elevate to root, add a few repository addresses from the tor project to ensure we are getting the latest and greatest versions of the software, update our local database of available software, and install the required packages.

$ sudo -s
# vi /etc/apt/sources.list.d/tor.list (or use your favorite editor)

Add the lines:

deb http://deb.torproject.org/torproject.org xenial main
deb-src http://deb.torproject.org/torproject.org xenial main
deb http://deb.torproject.org/torproject.org obfs4proxy main

Note: We are installing on Ubuntu 16.10 "Yakkety" but two of the entries above are for Ubuntu 16.04 "Xenial." That's OK. The packages in the tor project's Xenial repository seem to be more up to date than those found in the official Ubuntu Xenial repositories, and they run just fine on the Ubuntu 16.10 Yakkety release. I don't know if the tor project will publish a specific Ubuntu 16.10 Yakkety repository, but for now the entries above will get the job done. This of course raises the question of why the most current stable versions of the tor packages aren't already in the main Ubuntu repository. I don't know the reason for this either. I suspect it's probably a timing issue of when the software is added to the "upstream" Debian project, from which Ubuntu draws many software packages. But, I am unsure if this supposition is correct.

Next we need to download and add the gpg key for the tor project repository, so that our standard apt tools can read from the repository and verify that the packages are indeed those published by the project.

# gpg --keyserver keys.gnupg.net --recv-keys A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
# gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -

Now let’s update and install the software:

# apt-get install aptitude (my preferred apt tool)
# aptitude update
# aptitude install --with-recommends tor deb.torproject.org-keyring obfs4proxy tor-arm

Before we go any further: there is a bit of a glitch/typo in one of the AppArmor (mandatory access control) profile files that ships with the Ubuntu 16.10 distribution. We need to edit this file and fix the glitch before we continue with the configuration of tor on our server.

# vi /etc/apparmor.d/abstractions/tor

Change 2 entries/lines in this file from:

/usr/bin/obfsproxy PUx,
/usr/bin/obfs4proxy PUx,

to:

/usr/bin/obfsproxy ix,
/usr/bin/obfs4proxy ix,

After editing and saving the updated file issue the command:

# apparmor_parser -r /etc/apparmor.d/system_tor

Now that we've addressed the AppArmor glitch, let's start configuring tor. There are two files we need to address. The first file of interest is the "default" file used by the tor init.d script. We just want to make a quick tweak here, and this step is optional.

# vi /etc/default/tor

Uncomment and alter the line:

# NICE="--nicelevel 5"

to:

NICE="--nicelevel 15" (We removed the # and changed 5 to 15)

The impact of the above edit is to change the priority the operating system gives the tor process when it runs. 15 is actually a LOW priority, so we know tor won't intrude on other items we have running on our server. Again, the above is optional and not critical.
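
If you want to double-check the result once tor is running, ps can display the nice level of the process (a quick sketch; the NI column should read 15):

# ps -o pid,ni,comm -C tor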

Next, we're going to clear out the default main configuration file for tor and put in our own entries. But, we'll back up the original file first, just in case we need it for reference later.

# cp /etc/tor/torrc /etc/tor/torrc.orig (back this file up just in case)
# vi /etc/tor/torrc

Make the following entries in the file. Of course, change 192.168.100.100 to the address of your server. Change the SOCKSPolicy network/mask to match your network. Update the Address and Nickname values to meet your needs.

SOCKSPort 9050 # Default: Bind to localhost:9050 for local connections.
SOCKSPort 192.168.100.100:9050 # Bind to this address:port so available to the LAN
SOCKSPort 192.168.100.100:9100 # Bind to this address:port so available to the LAN

SOCKSPolicy accept 192.168.0.0/16 # Allow any device on the LAN to use the proxy
SOCKSPolicy accept6 FC00::/7 # Allow any IP6 device on the LAN to use the proxy
SOCKSPolicy reject * # Deny anything else

Log notice file /var/log/tor/notices.log # Log all notices from the proxy
#Log debug file /var/log/tor/debug.log # Uncomment this line only if you really need it

RunAsDaemon 0 # init.d script is already running tor as daemon
DataDirectory /var/lib/tor # Default value
ControlPort 9051 # Default value
HashedControlPassword 16:872860B76453A77D60CA2BB8C1A7042072093276A3D701AD684053EC4C # Probably should hash a new password and replace this default control password

HiddenServiceStatistics 1 # Default Value

ORPort 443 # Default value is actually 9001
#DirPort 80 # Commented out because we are configuring as a bridge
#DirPortFrontPage /etc/tor/tor-exit-notice.html # Commented out because we are configuring as a bridge
Address externalfacing.address.com # Change to suit
Nickname yourNodeNickName # Change to suit
OutboundBindAddress 192.168.100.100 # Change to suit
DisableDebuggerAttachment 0

AccountingMax 40 GB # Increase if you can be more generous per month (e.g. 100 GB)
AccountingStart month 1 00:00 # Start counting the 40 GBytes above on 1st of month
RelayBandwidthRate 100 KB # Increase if you can be more generous (e.g. 2 MB)
RelayBandwidthBurst 200 KB # Increase if you can be more generous (e.g. 4 MB)

ContactInfo Your Name <you@whatever.com> # Just in case network operators need to reach you

ExitPolicy reject *:* # no exits allowed
BridgeRelay 1 # Yes, our node is to operate as a bridge
PublishServerDescriptor 0 # Don’t let the world know I’m here

ServerTransportPlugin obfs3,obfs4 exec /usr/bin/obfs4proxy managed # Obfuscate Traffic
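
One note on the HashedControlPassword entry above: rather than keeping the example hash, you should generate your own. The tor binary can do this for you (YourSecretHere is a placeholder for a password of your choosing); paste the 16:… value it prints into the HashedControlPassword line:

# tor --hash-password YourSecretHere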

All done configuring tor. Restart the tor service.

# service tor restart

Now, one interesting thing above: tor usually runs on port 9001, but we are going to run it on port 443 (https), which is a very common port for secure Internet traffic and so is less likely to draw attention. Feel free to stick with port 9001 if you already have a service or server on this system using port 443. Use of port 443 in this example is just a preference of mine.

A second interesting point is that we are using obfs4proxy (which is also backwards compatible with the more widely deployed obfs3), which endeavors to make the traffic to/from our node less obvious to a third party observer.

Thirdly, we are set up as a "proxy" on the local area network. So, any PC or program (web browser, irc, chat, etc.) that can make use of a SOCKS proxy can use 192.168.100.100 (really, the actual address of your server on your LAN) on either port 9050 or port 9100.
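
As a quick sanity check from another machine on the LAN, any SOCKS-aware client will do; curl, for instance, can fetch a page through the proxy (this sketch assumes your server really is at 192.168.100.100):

$ curl --socks5-hostname 192.168.100.100:9050 https://check.torproject.org/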

Finally, it is important to know that all of the above is for naught if you can't connect to the tor network and the network can't connect to your server. So you will need to make sure your Internet firewall/router is set to forward incoming traffic on port 443 to port 443 of the internal server we just set up as a proxy & bridge.
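
If your Internet router happens to be a Linux box, the forward might look something like the iptables sketch below (eth0 and the LAN address are assumptions; consumer routers expose the same idea through their port-forwarding UI):

# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.100.100:443
# iptables -A FORWARD -p tcp -d 192.168.100.100 --dport 443 -j ACCEPT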

You can monitor the activity of your node using the "arm" command, which is kind of like top/htop for your tor node. And of course you can always see what is generally going on with your node by tailing /var/log/tor/notices.log:

# tail -f /var/log/tor/notices.log
# arm

To learn more about tor, alternative configuration options and other software from the tor project check out http://torproject.org.

KDE Neon: Now with Wayland!

This post may be a bit on the late side, but I find the topic interesting enough to share with you. Recently, the KDE Neon project released a developer edition of their Linux distribution that uses Wayland as the display server. Wayland purports to be a lighter, simpler and easier to use replacement for the ubiquitous X Window System and protocols.
I've taken this release for a spin. I must stress that it is a development release, not intended for production use, so it's probably best to hold off on trying it on your main PC/notebook. My first impression is this…

“Wow, I have nothing to report! It just worked.”

I've installed the distribution on an older System76 Gazelle notebook. This system has an Intel i7-3610QM (4 cores/8 threads, up to 3.3 GHz), 16 GB of RAM, and hence uses the 3rd Gen Core processor integrated graphics. So, admittedly, it's a pretty "Plain Jane" system. Still, it's been a super reliable system, and I would have had no idea it was running Wayland if I hadn't installed this particular development release myself.

Consider that a rock-solid endorsement for KDE/Plasma/Wayland on this hardware configuration. When a technology works so well you don't notice it, that is a great thing. If Wayland indeed makes life easier for the developers maintaining the window system, the developers preparing drivers, and of course the developers writing software for end users that runs atop the stack, all the better. And, just an impression with no metrics, it "feels" snappy. Wayland does look to be an important evolution of the graphics subsystem, and perhaps more distributions will adopt it as the default.

I'm looking forward to seeing what others experience with the transition from X to Wayland. I understand there is still work to be done on the Wayland project with regard to drivers for higher end graphics cards, and I am excited to see how Wayland will evolve as these issues are addressed. I have two primary production workstations with pretty good graphics cards, and I'd like to take as much advantage of that hardware as possible.


On another note: though I'm pretty new to the KDE Neon distribution, I like it. It is based on the most recent stable release of Ubuntu but focuses on shipping the most current KDE Plasma Desktop. I am a big fan of Ubuntu, but was kind of curious why the KDE Neon project didn't go further "upstream" and pivot directly off of the "notoriously and obsessively stable" (a good thing) Debian, of which Ubuntu is a derivative. Again, not an issue, just an item of curiosity. I should also mention KDE Neon ships as a pretty bare-bones distribution.

I actually like this approach, as I'm for anything that keeps the bloat in my system down. And, the packaging/repository system of Debian and its derivatives makes adding the packages that you use and enjoy a pretty simple matter. For me, I just needed to install some basic productivity and development packages upon which I depend, a little bit of my own bloat if you will:


# apt-get install aptitude
# aptitude update
# aptitude dist-upgrade
# aptitude install --with-recommends kgpg libreoffice calligra gimp vim-gtk postgresql postgresql-contrib postgis qgis openjdk-8-jdk openjdk-8-demo openjdk-8-doc kdevelop kompare build-essential pkg-config libc6-dev m4 gcc g++ g++-multilib autoconf libtool libncurses5-dev zlib1g-dev automake git subversion ant maven k3b dragonplayer krfb chromium-browser kmahjongg fortune-mod cowsay screenfetch  boinc-manager  htop iotop smartmontools

There are still a few other packages I use that don't seem to be in the standard repositories, or are stale there. But, these can be easily downloaded from their respective project websites:

SmartGit: http://www.syntevo.com/smartgit/download?file=smartgit/smartgit-8_0_3.deb
Chrome: https://www.google.com/chrome/browser/desktop/index.html
Earth: http://www.google.com/earth/download/ge/agree.html

And, then there are a few packages that don't seem to have debs/repositories at all and have to be downloaded and installed in a more manual fashion. These packages are widely used, and it does seem odd that the most current stable versions of these packages are not already in the "Universe" repositories:

NetBeans: https://netbeans.org/downloads/
(Hopefully, this will change now that it is an Apache project.)

IntelliJ: https://www.jetbrains.com/idea/download/download-thanks.html?platform=linux
(C'mon JetBrains! At least add your own repository. You have a license/key activation system anyway.)

Gradle: https://services.gradle.org/distributions/gradle-3.1-all.zip
(Ironic, as it is a build system and deb package creation/push could just be a task in the main project.)

WildFly: http://download.jboss.org/wildfly/10.1.0.Final/wildfly-10.1.0.Final.zip
(Maybe an Ubuntu deb is ignored because this project is tied closely to RedHat? Just a pity if that is the reason.)

And, of course, there is the latest "toy" I'm playing with and find interesting, the new ZCash crypto-currency, which does have a deb/repository but is still a new project and as of yet requires a number of additional configuration and setup steps. I've written these steps up in a previous blog entry:

ZCash: https://gesker.wordpress.com/2016/11/04/zcash-on-ubuntu-debian/


In short, check out the KDE Neon distribution. I like it. You may find it suits you as well. And, if you have a spare system, take the new KDE Neon Wayland development release for a spin. Hopefully, you will have as little to note with regard to KDE on Wayland as I have. And, that is the highest praise one can give to a subsystem!


Please post comments below and share your thoughts and experiences. More importantly, if you come across bugs or regressions, let the KDE Neon project know by filing a bug report.

ZCash on Ubuntu & Debian

ZCash is the newest crypto-currency on the block. As of this writing the project is at version 1.0.1 of its client. And, the project seems Debian/Ubuntu friendly, as the maintainers have been pushing the official client as pre-built deb packages for general consumption. Very nice. Thank you z.cash project!

There seems to be a lot of help/instructions on the web. I’ve tried to summarize what I have found at various locations into this recipe below. But, https://z.cash should be considered the final authority on setup and configuration.

This write up intends to provide quick and dirty instructions on:

  1. Installing the packages
  2. Configuring ZCash
  3. Controlling daemon and mining behavior with user level crontab entries

I make the following assumptions:

  1. You have the ability to elevate to root
  2. You are OK with running the ZCash node all the time
  3. You would like to mine coins in the background in your off hours
  4. You would like to participate in a mining pool
  5. You do not want to mine coins during your work hours
  6. You will be running the ZCashd daemon as a regular user
  7. You don’t want to bother with a init.d or systemd script for the moment

At the end of the recipe I pose a few questions. Please comment or add to the discussion if you wish. I hope in advance you find this write-up useful so let’s begin!

Elevate to Root and Install:

$ sudo -s
# apt-get install apt-transport-https
# wget -qO - https://apt.z.cash/zcash.asc | apt-key add -
# echo "deb [arch=amd64] https://apt.z.cash/ jessie main" | tee /etc/apt/sources.list.d/zcash.list
# apt-get update && apt-get install zcash

Become a regular user and complete the configuration:

Return to regular user after the sudo tasks above…
$ exit

Prime ZCash (keys and zkSNARK stuff)…
$ zcash-fetch-params

Create configuration directory…
$ mkdir -p ~/.zcash

Add entries to the configuration file…
$ echo "addnode=mainnet.z.cash" > ~/.zcash/zcash.conf
$ echo "rpcuser=username" >> ~/.zcash/zcash.conf
$ echo "rpcpassword=$(head -c 32 /dev/urandom | base64)" >> ~/.zcash/zcash.conf
$ echo 'equihashsolver=tromp' >> ~/.zcash/zcash.conf
$ echo '#gen=1' >> ~/.zcash/zcash.conf
$ echo "#genproclimit=$(nproc)" >> ~/.zcash/zcash.conf

Note: the last two entries above go into the zcash.conf file commented out. This is OK, as we will set these parameters in the crontab entries below. If you decide you would rather run ZCash without my crontab entries, you can simply uncomment those configuration lines.

Create an address. (You will use this instead of my address in the crontab entries below when mining coins, adding the suffix .rig1, .rig2, etc. for each machine where you are mining; see the note just after these commands.)…
$ zcash-cli z_getnewaddress
$ zcash-cli z_listaddresses
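
Since the crontab entries below mine to a transparent (t1…) address, you may also want to create one of those, and once the daemon has been running for a while you can check on your balances. A short sketch using the standard client commands (getnewaddress and getinfo come from the Bitcoin Core lineage of the client):

$ zcash-cli getnewaddress
$ zcash-cli getinfo
$ zcash-cli z_gettotalbalance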

Make your user-level crontab entries…
$ crontab -e

# Start the ZCash daemon on reboot using only one (1) core for mining
@reboot /usr/bin/zcashd -reindex -showmetrics=0 -gen=1 -genproclimit=1 -stratum=stratum+tcp://us1-zcash.flypool.org:3333 -user=t1PBheBAnP8iAPjCWbmPsrgyjW7mWpKv4Ct.rig1

# Kill the ZCash daemon running on one (1) core @ 5:25 PM — E.g After Work
25 17   * * *   /usr/bin/pkill zcashd
# Start the ZCash daemon using more processing power (8 cores instead of 1) — E.g After Work
30 17   * * *   /usr/bin/zcashd -reindex -showmetrics=0 -gen=1 -genproclimit=8 -stratum=stratum+tcp://us1-zcash.flypool.org:3333 -user=t1PBheBAnP8iAPjCWbmPsrgyjW7mWpKv4Ct.rig1

# Kill the ZCash daemon using many cores @ 7:25 AM — E.g. Before Work
25 07   * * *   /usr/bin/pkill zcashd
# Start the ZCash daemon @ 7:30 AM using only one (1) core for mining — E.g. Before Work
30 07   * * *   /usr/bin/zcashd -reindex -showmetrics=0 -gen=1 -genproclimit=1 -stratum=stratum+tcp://us1-zcash.flypool.org:3333 -user=t1PBheBAnP8iAPjCWbmPsrgyjW7mWpKv4Ct.rig1

Note: flypool.org is just the first mining pool I found that didn't require an account, so I have been using it for testing. Please leave a comment below if you know of a good one and wish to make a recommendation. In addition, I have seen that there are other mining packages out there that can be compiled and installed with little trouble, but for now I'm content to try out the base client for pool mining.

Note: genproclimit is just how many cores you wish to use when mining. At the command line, issue the "nproc" command to learn/confirm how many cores you have in your system. Adjust the genproclimit in the crontab entries to suit your needs.

Note: Simply using zcashd -daemon isn't a bad option either, if you're not worried about system responsiveness. Your crontab -e entry would then just be "@reboot /usr/bin/zcashd -daemon"; just make sure you uncomment the gen and genproclimit lines in ~/.zcash/zcash.conf.

Items I’m curious about:

  1. What is the ETA on the complete/definitive manual?
  2. ETA on “official” Android/iOS app?
  3. Is it good practice to use a Tor proxy or is this overkill?
  4. Can all of the command line options found in “zcashd –help” be used in the ~/.zcash/zcash.conf file?
  5. Will there be “official” GUIs from the project to administer:
    1. mining (including pool)?,
    2. wallet(s)?
    3. transaction processing?
      1. If so, I hope it is implemented in KDE or Java Swing so that it behaves the same on Windows/Linux/Mac etc.
      2. Minimize to system tray on KDE Plasma/Gnome/Windows/Etc.
  6. Will there be an “official” z.cash mining pool hosted at z.cash?
    1. A mining contribution going back to the project might be nice.
  7. Will the daemon get some usability functionality? For instance…
    1. Mine when processor is available?
    2. Or, “nice” settings in the zcash.conf file?
    3. Or, more sophisticated date/time or sleep controls than afforded by my crontab approach above?
    4. Again, would be great if all this could be controlled and monitored via an “official” GUI
  8. Will the “official” client get GPU mining capabilities to take advantage of graphics cards?
    1. As mentioned above I have noticed some alternate mining software is becoming available but haven’t tried them as of yet.
  9. There are a number of “Crypto-Currency Exchanges” so will one or more of these be designated as “definitive” by the project?
    1. I’ve noticed that http://kraken.com has already added zcash trading
      1. Not a plug. Just an observation that it was fast considering the project has only been “live” for a short time.
  10. What will the peer review look like on the new currency?
  11. How much peer review does the project require to confirm that the currency is secure and anonymous?

Finally, thank you to the members of the Z.Cash project for the new crypto-currency! Please check them out directly at: https://z.cash

PostgreSQL 9.5 Streaming Cluster on Ubuntu 16.04

It can be very useful to have PostgreSQL configured in a cluster. PostgreSQL is able to do a few different kinds of clustering right "out of the box." But, each of the options available out of the box is a MASTER-SLAVE scenario, meaning that the first server (the MASTER) handles all the writes and a second server (the SLAVE) gets a COPY of everything that is written to the MASTER.

PostgreSQL offers a few different ways for the database to achieve this: log shipping, binary replication, and streaming replication.

But, why is this useful? There are a few reasons. First, if your MASTER server goes down you can promote the SLAVE to be the new MASTER and get your database-dependent application back online quickly with little chance of data loss. Second, if you have some very long or intensive queries, these can be run against the SLAVE, leaving the MASTER more responsive (reduced load) for other write activity in your application. Third, boring but necessary I/O-intensive administrative tasks like backups can be run against the SLAVE, again taking load off the MASTER.
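
That third point, for example, just means pointing your usual backup tooling at a SLAVE instead of the MASTER. A sketch (the address matches the Bronze SLAVE in the lab setup below; the database name, user and output path are placeholders):

pg_dump -h 192.168.100.102 -U youruser -d yourdb -f /backups/yourdb.sql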

There do appear to be some downsides to the "out of the box" solutions in PostgreSQL. First, there does not appear to be an automated fail-over mechanism, meaning that you will have to take a few steps to ensure that your MASTER is truly down and then promote your SLAVE to be the new MASTER. (I'm not 100% sure on that point.) And, in the MASTER-SLAVE cluster your application cannot WRITE to the SLAVE!
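
For reference, the manual promotion itself is a single step once you have decided the MASTER is really gone. This walk-through uses a trigger file for that (see below), but PostgreSQL also ships pg_ctl promote; a sketch, with paths as found in the Ubuntu 9.5 packages:

sudo -u postgres /usr/lib/postgresql/9.5/bin/pg_ctl promote -D /var/lib/postgresql/9.5/main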

This kind of cluster, where your application can read and write to any database server in the cluster and the servers all stay in sync, is called a MASTER-MASTER cluster, and it doesn't appear to be an "out of the box" option in PostgreSQL 9.5. That being said, there appear to be many options out there to address this need: if not true MASTER-MASTER, at least intelligent load balancing and/or fail-over. Perhaps even too many options!?

The application I am working on now could make good use of a true MASTER-MASTER scenario: the servers working together without being fronted by some load balancer solution. But, as I said above, there are many approaches to fail-over and MASTER-MASTER cluster configurations, so when I have a chance to investigate the current options and choose one for my production application I'll write up what I found. Also, if you follow the PostgreSQL project much, it seems this project moves at light speed. I wouldn't be surprised if MASTER-MASTER is built in sometime soon. I don't know that, but again these folks seem to make useful and solid improvements to this project VERY quickly.

Enough discussion, let's get to work!
The run-through below is an example of a scenario with one MASTER and 2 SLAVES.

I have 3 servers:
MASTER: Gold (192.168.100.100) -> WRITE new data
SLAVE1: Silver (192.168.100.101) -> Large and/or intensive READ queries
SLAVE2: Bronze (192.168.100.102) -> Backup routines, and, well, I've got room for a third so why not.

A few assumptions:
You are running Ubuntu Server 16.04 (Latest stable as of this blog post)
You are familiar with the command line and editing files. I use vi below, but use any editor you like.
You have a little patience…

On the MASTER:
Let’s get PostgreSQL installed…

sudo -s (Enter your password when prompted)
apt-get install aptitude
aptitude update
aptitude install --with-recommends postgresql postgis

su postgres
psql
CREATE ROLE replication WITH REPLICATION PASSWORD 'YourSecretPassword' LOGIN;
\q
exit

cd /etc/postgresql/9.5/main

vi pg_hba.conf and append the lines:
host    all     all     192.168.0.0     255.255.0.0     md5
host    replication replication 192.168.100.100/16 md5 #Gold
host    replication replication 192.168.100.101/16 md5 #Silver
host    replication replication 192.168.100.102/16 md5 #Bronze

Note: In the above entries I use /16 and 255.255.0.0 for the netmask part of my entries. Of course, adjust the IP scheme and netmask to match your network and server entries. Also, I put all my cluster members in the pg_hba.conf file on each cluster member just so I don't have to worry about it as I play with promoting and demoting members to/from MASTER/SLAVE status.

vi postgresql.conf, find the following entries, uncomment and edit:
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 10 # Probably overkill as I only have a few slave servers, but I may add more
wal_keep_segments = 2048 # Probably _WAY_ overkill, as I wouldn't expect my SLAVE to be down that long. But, I've got plenty of disk space so…

service postgresql restart; tail -f /var/log/postgresql/postgresql-9.5-main.log

Watch the logs for a bit to make sure postgresql came back up OK.
Control-C to stop tailing the log

Loosen security for just a little while…

We need to make a couple of changes in security that we will UNDO later in these instructions. These temporary changes are needed because we will 'prime' the SLAVE in a future step to make sure it is ready to go.

Reconfigure SSH…

vi /etc/ssh/sshd_config
change the line:
PermitRootLogin prohibit-password
to:
PermitRootLogin yes

service ssh restart

Enable root login and password…
passwd root
passwd -u root

Again, don’t worry after we ‘prime’ the SLAVE we are going to lock the root account back down.

Ok, so the MASTER is now good to go.  Let’s work on our SLAVE.

On the SLAVE (Both of them if you decide you need two as well):
Let’s get PostgreSQL installed…

sudo -s (Enter your password when prompted)
apt-get install aptitude
aptitude update
aptitude install --with-recommends postgresql postgis

cd /etc/postgresql/9.5/main

vi pg_hba.conf and append the lines:
host    all     all     192.168.0.0     255.255.0.0     md5
host    replication replication 192.168.100.100/16 md5 #Gold
host    replication replication 192.168.100.101/16 md5 #Silver
host    replication replication 192.168.100.102/16 md5 #Bronze

Note: In the above entries I use /16 and 255.255.0.0 for the netmask part of my entries. Of course, adjust the IP scheme and netmask to match your network and server entries. Also, I put all my cluster members in the pg_hba.conf file on each cluster member just so I don't have to worry about it as I play with promoting and demoting members to/from MASTER/SLAVE status.

cd /etc/postgresql/9.5/main

vi postgresql.conf, find the following entries, uncomment and edit:
listen_addresses = '*'
hot_standby = on

cd /var/lib/postgresql/9.5/main

vi recovery.conf and add:

standby_mode          = 'on'
primary_conninfo      = 'host=gold.yourdomain.com port=5432 user=replication password=YourSecretPassword'
trigger_file = '/etc/postgresql/triggerImNowTheMaster' # This little guy is important, as its presence tells the server to kick over from SLAVE to MASTER

chown postgres.postgres recovery.conf

This next command is WHY we weakened security above and allowed root login. Basically, we want to make an exact copy on our SLAVE of everything that is on the MASTER (all the files and folders in the PGDATA directory). This way, when we start the SLAVE it is in sync with the MASTER.

So, issue an rsync command to copy over the files from the MASTER while preserving permissions, etc…

rsync -av -e ssh root@192.168.100.100:/var/lib/postgresql /var/lib

service postgresql restart; tail -f /var/log/postgresql/postgresql-9.5-main.log

Watch the logs for a bit to make sure postgresql came back up OK.
Control-C to stop tailing the log (or continue to watch if you wish)

The log entries should look pretty similar to the MASTER's, but you will begin to see entries similar to:
LOG:  started streaming WAL from primary

On the MASTER:

tail -f /var/log/postgresql/postgresql-9.5-main.log

Watch the logs for a bit to make sure postgresql came back up OK.
Control-C to stop tailing the log (or continue to watch if you wish)
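
While you are on the MASTER you can also confirm replication from its side; the pg_stat_replication view lists each connected SLAVE along with the WAL positions it has received and replayed (a sketch; these column names are as of 9.5):

sudo -u postgres psql -c "SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;"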

But, we need to remember to UNDO the security changes we made on the MASTER.

vi /etc/ssh/sshd_config
change the line:
PermitRootLogin yes
to:
PermitRootLogin prohibit-password

service ssh restart

passwd -dl root (this re-locks the root account)
— passwd: password expiry information changed.

Some final items…

In the scenario above the WAL records are transmitted directly from the MASTER to the SLAVE(s), so unlike log shipping no common shared directory is required. However, in the walk-through above the transmission of transactions from the MASTER to the SLAVE is asynchronous, so if the MASTER does fail a few transactions might be missing from the SLAVE. If the MASTER/SLAVE match needs to be closer to perfect for your scenario, there is an option in the postgresql.conf file to make this communication synchronous. But, there will be a performance impact, as a record written to the MASTER will not be reported as committed to the application until it is also written on the SLAVE.
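
A sketch of that synchronous option (this assumes the Silver SLAVE; the key is that the application_name in the SLAVE's primary_conninfo matches synchronous_standby_names on the MASTER):

On the MASTER, in postgresql.conf:
synchronous_standby_names = 'silver'

On the SLAVE, in recovery.conf:
primary_conninfo = 'host=gold.yourdomain.com port=5432 user=replication password=YourSecretPassword application_name=silver'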

There are also some variations on this MASTER-SLAVE approach. You can have all the SLAVES look to the MASTER, designating one primary SLAVE as synchronous while the rest are asynchronous. Or you can have one PRIMARY SLAVE look to the MASTER synchronously and have many other SLAVES look to this PRIMARY SLAVE to get their updates in an asynchronous manner.

For the sake of speed above I lowered root security. DON'T DO THIS IN AN UNSECURED ENVIRONMENT, AND CONSIDER ANY NETWORK SEGMENT YOU DON'T HAVE FULL PHYSICAL CONTROL OVER AS INSECURE. I did this out of haste and have high confidence that my isolated demo/lab network is pretty secure. Priming the SLAVE is important, but if I were communicating with the servers in a wild or unprotected environment I'd probably tar/gzip the PGDATA folder (/var/lib/postgresql), being sure to preserve permissions, move this folder to the SLAVE and extract it there. There is also a command (pg_basebackup) that will probably make this the preferred approach; I'm looking forward to revisiting it once version 9.6 is released.

Now that you have your MASTER and a couple of SLAVES setup. TEST, TEST, TEST!!!!

Create a new Db on the MASTER. See it show up on your SLAVES.
Create a table in the Db on the MASTER. See it show up on your SLAVES.
Make some inserts, updates and deletes in your table. See the changes on your SLAVES.
su to postgres, run psql and enter \du to see the ROLES (aka users) you create propagate from the MASTER to the SLAVES.
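
A minimal version of those tests might look like this (reptest is an arbitrary name; run the first three commands on the MASTER and the last one on a SLAVE):

sudo -u postgres psql -c "CREATE DATABASE reptest;"
sudo -u postgres psql -d reptest -c "CREATE TABLE t (id serial PRIMARY KEY, note text);"
sudo -u postgres psql -d reptest -c "INSERT INTO t (note) VALUES ('hello from gold');"
sudo -u postgres psql -d reptest -c "SELECT * FROM t;"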

Take one of your slaves down for a few minutes and bring it back up. Do a "ps ax | grep receiver" and you should see something similar to "postgres: wal receiver process  streaming 0/1A0519918" as it tries to catch up with the MASTER. Run the "ps ax | grep receiver" command a few times and you will see the WAL position change during the catch-up.

Touch /etc/postgresql/triggerImNowTheMaster on the SLAVE, chown postgres.postgres /etc/postgresql/triggerImNowTheMaster, shut down the MASTER, and you should be able to WRITE to the SLAVE. By the way, when you promote a SLAVE you will have to update any other SLAVES you may have to point to the new MASTER in their /var/lib/postgresql/9.5/main/recovery.conf files.

Think of other things that might go wrong and try them. Try new scenarios and try to break things in your test environment; that's the best way to learn! Production is not the place to learn new things about your software and its configuration. Probably every developer/admin has learned that one the hard way at some point!   😉

Finally, I hope you found this walk-through useful. Please leave a comment if you did, and I always welcome suggested improvements. Good luck with your new cluster.

WildFly 10 — java.lang.OutOfMemoryError: Metadata space

Occasionally, during development, WildFly seems to chew up its Metaspace. For me this happens when I am deploying and undeploying an application often on my workstation. I find it useful to edit the standalone.conf file in the bin directory of my local WildFly install and give the application server a little more breathing room with regard to memory. I make the following edit around line 50 of that file:

Change:
JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"

To:
JAVA_OPTS="-Xms64m -Xmx2G -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=2G -Djava.net.preferIPv4Stack=true"

I'm using JDK 8, where the HotSpot JVM keeps the representation of class metadata in native memory, in a region called Metaspace.
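
If you want to watch Metaspace usage while you deploy and undeploy, the JDK's own tools can report it. A sketch: jps finds the WildFly JVM (standalone WildFly runs via jboss-modules.jar), and the MC/MU columns of jstat -gc are the Metaspace capacity and usage in KB; the pid below is a placeholder:

jps -l | grep jboss-modules
jstat -gc <pid-from-jps> 5000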

And, yes, 2G above is probably overkill. Tune to your needs and available RAM on your workstation.

WildFly 10 and WFLYTX0013

You may get the warning "WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique." in your logs when you start up WildFly, even if you are running in standalone mode. This really isn't a problem, but if you want to prevent it from showing up in your logs, edit your standalone-full.xml (or whichever standalone config file you are using) and add an identifier to the appropriate tag.

Find the tag <core-environment> and change it to:

<core-environment node-identifier="localhost">

Or, whatever you like. (Maybe the actual host name?)

But, keep in mind that if your environment/application grows to where you need multiple nodes, THIS MUST BE A UNIQUE VALUE on each of your WildFly nodes, so you will have to update/change this value as you add nodes.
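
If you prefer the management CLI to hand-editing the XML, the same attribute can be set with jboss-cli (a sketch; assumes a default standalone install with the management interface on its usual port):

$ ./bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=transactions:write-attribute(name=node-identifier,value=node1)
[standalone@localhost:9990 /] reload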