Tuesday, November 11, 2014

IoC (Internet of Cigars)-enabled humidor!

The latest trend: IoC or Internet of Cigars is here! :)

For a long time I tried to keep the humidity in my cigar humidor at a constant level. No luck. I have humidity beads from Heartfelt Industries designed to hold 65% humidity, but there was still too much fluctuation, I lost one whole batch of cigars to mold, and checking the humidity levels often enough was too much work.

That was before my humidor was connected to the Internet of Cigars.

The sensor

My humidor has a Z-Wave humidity sensor, the Everspring ST814, that sends a report to my home automation system every time the humidity changes by 5% and the temperature by 0,1 °C. I would like a sensor that reports humidity even more often, but the market is small.

The home automation system itself doesn't do anything with the data, because it has limited capabilities for graphing historical values, so I've configured it to only receive the Z-Wave reports. The home automation software I use (OSA) also has a RESTful interface, so it's easy to query the current humidity in the humidor.

The monitoring and graphing


For graphing and alerts I use Cacti, a widely used monitoring and graphing tool. Cacti is open source, which means it's not the most user-friendly software in the world, but like a lot of open-source software it is powerful and versatile. It is easy to write scripts for Cacti that then act as sensors; a data input script can be written in pretty much any language, and I use Perl. My Cacti script for the Z-Wave humidity sensor simply fetches the RESTful data as JSON and feeds the temperature and humidity values to Cacti. With Perl it's only 10-20 lines of code.
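Here's a minimal sketch of what such a data input script can look like. The URL and the JSON field names below are made up, since they depend on how the home automation software exposes the sensor:

#!/usr/bin/perl
# Minimal Cacti data input script: fetch the sensor JSON over REST and
# print the values in Cacti's "name:value name:value" format.
use strict;
use warnings;
use LWP::Simple qw(get);
use JSON qw(decode_json);

# Hypothetical endpoint of the home automation system's RESTful interface
my $url = 'http://homeautomation.local/api/sensor/humidor';

my $body = get($url) or die "Could not fetch $url\n";
my $data = decode_json($body);

# Cacti expects space-separated name:value pairs on one line
printf "temperature:%.1f humidity:%.1f\n", $data->{temperature}, $data->{humidity};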

The database 


Cacti uses an RRD database for sensor data; RRD stands for Round Robin Database. RRD saves values in pre-created, constant-sized database files, and when it reaches the end of a file it starts over in round-robin fashion. In this way there can be several archives: one holding data at week-level resolution, one at month level and one at year level, each with a different temporal resolution.
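Cacti creates and updates the RRD files itself, but to illustrate the idea, a file with several round-robin archives at different resolutions could be created by hand roughly like this (the data source names, limits and retention periods here are just examples):

# One RRD fed every 5 minutes, kept at three resolutions:
# 5-minute averages for a week, 1-hour averages for a month, 1-day averages for a year
rrdtool create humidor.rrd --step 300 \
    DS:humidity:GAUGE:600:0:100 \
    DS:temperature:GAUGE:600:-40:80 \
    RRA:AVERAGE:0.5:1:2016 \
    RRA:AVERAGE:0.5:12:744 \
    RRA:AVERAGE:0.5:288:366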



Here's an hourly view.


The daily view shows the same straight line, so let's skip the weekly and monthly views and go straight to the yearly view.


The notifications


Cacti has a plugin architecture and, being open source, it has lots of plugins. The plugin I use to alert me is Threshold. When the humidor humidity is too high (>75%) or too low (<60%), the Threshold plugin sends me an email every four hours. As you can see from the yearly view, I have to add some water to the humidor about once a month. I could tighten the warning levels a little, but this is not too much work for the lazy smoker I am. The alerts are sent with sendmail, so some additional configuration was needed so that the Cygwin environment could send mail via SSMTP.
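For reference, the relevant part of the Cygwin ssmtp configuration (/etc/ssmtp/ssmtp.conf) looks roughly like this; the mail hub, account and addresses below are placeholders for your own SMTP server:

# /etc/ssmtp/ssmtp.conf (sketch): relay all local mail through an SMTP server
root=alerts@example.com
mailhub=smtp.example.com:587
UseSTARTTLS=YES
AuthUser=alerts@example.com
AuthPass=secret
FromLineOverride=YES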


 
My Internet of Cigars keeps a constant watch over my cigars' well-being and gives orders to the water boy (me) to refill the beads with water. Now I can enjoy my cigars without stress and wait for IoC to appear in Gartner reports :)


Sunday, November 9, 2014

Why the Donetsk People's Republic's elections held on 2.11.2014 are a fraud


First you should read the blog post I use as a reference.

Elections in Donets - anatomy of a fraud.
I think this needs more publicity. I will try to explain why this kind of election result in East Ukraine is practically impossible.

First the official results:
Zaharchenko     : 765 340 / 969 644
Kofman             : 111 024 / 969 644
Sivokonenko      :  93 280 / 969 644
Discarded votes : 43 039 / (969 644+43 039)

Everything seems to be ok?

Then we calculate the percentages:
Zaharchenko     : 765 340 / 969 644 = 78,92999905119820000%
Kofman             : 111 024 / 969 644 = 11,44997545490920000%
Sivokonenko      :  93 280 / 969 644 = 9,62002549389260000%
Discarded votes :  43 039 / (969 644+43039) = 4,24999728444143000%

The problem with these figures is that the first two decimals look normal, but then at least the next two decimals are either 00 or 99, which means every result is very close to a round two-decimal percentage. All of them.

Let's take Zaharchenko's votes for further examination:
Zaharchenko: 765 340 / 969 644 = 78,92999905119820000%
That is really close to 78,93%
In fact it is not possible to get a percentage of exactly 78,93%, because that would not give a whole number of votes:
969 644 * 78,93% = 765 340,0092 votes
So if you first "invented" the two-decimal percentage as the voting result and then calculated the vote count, you would have to truncate it to 765 340.

There were 969 644 accepted votes. That means there are 969 644 possible results for Zaharchenko, and the probability of any single result is 1/969 644.

There are 10 000 possible percentages with two decimals. Because an exact two-decimal result is not possible and almost always falls between two whole vote counts, let's count both the upper and the lower vote count as a match. So there are roughly 20 000 vote counts that land right next to a two-decimal percentage.

The probability that Zaharchenko's vote count is within one vote of a two-decimal percentage is 20 000/969 644 ~ 1/50. That is a 2% probability. That is not impossible; it could happen. It means that if you held elections anywhere in the world and picked 50 candidates, you would expect roughly one of them to land on a two-decimal percentage.

Let's then get the two decimal percentages for all the candidates:
Zaharchenko     : 765 340 / 969 644 = 78,93%
Kofman             : 111 024 / 969 644 = 11,45%
Sivokonenko     :   93 280 / 969 644 = 9,62%
Discarded votes :  43 039 / (969 644+43 039) = 4,25%

This is where things get highly improbable.
Zaharchenko's probability 1/50
Kofman's probability 1/50
Sivokonenko's probability 1/50
Discarded votes probability 1/50.

Because the candidates' percentages sum to 100%, it is not possible for two candidates to have two-decimal percentages while the third has more decimals. So we only use the probabilities of two of the candidates; the third one's result is "tied" to the first two, so its probability is 1/1, meaning it always happens. The probability that all three candidates have two-decimal percentages is therefore 1/50 * 1/50 * 1 = 1/2500.

Then we add the discarded votes, and the probability that all the percentages have two decimals is 1/2500 * 1/50 = 1/125 000, or a 0,0008% probability. That's the smoking gun. It means the DNR would have to hold 125 000 elections to get one result like this. Of course, it is still possible. At least in the Donetsk People's Republic.
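You can reproduce the figures and see how close each vote count sits to a round two-decimal percentage with a few lines of Perl (just a sketch using the official numbers quoted above):

use strict;
use warnings;

my $accepted = 969_644;
my %votes = (
    Zaharchenko => 765_340,
    Kofman      => 111_024,
    Sivokonenko =>  93_280,
);

for my $name (sort keys %votes) {
    my $pct     = 100 * $votes{$name} / $accepted;
    my $nearest = sprintf('%.2f', $pct);          # nearest two-decimal percentage
    my $implied = $nearest / 100 * $accepted;     # vote count that percentage would imply
    printf "%-12s %.10f%%  nearest %s%%  implied votes %.4f\n",
           $name, $pct, $nearest, $implied;
}

# Discarded votes are compared against all cast votes (accepted + discarded)
my $discarded = 43_039;
printf "%-12s %.10f%%\n", 'Discarded', 100 * $discarded / ($accepted + $discarded);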

PS. The elections held in May 2014 in the DNR and LNR had percentages with only one decimal.

Monday, November 3, 2014

Creating manual standby in Oracle 12c SE with PDB/CDB Multitenant architecture

Creating a manual standby with Oracle 12c SE is a cost-effective way to get entry-level fault tolerance. With a manual standby the worst-case scenario is that you lose the last couple of minutes of updates in a server failure. This depends on how you have configured log switches and archive log copying to the standby server. If you do log switches every 5 minutes and rsync every minute, at worst you can lose 6 minutes worth of updates. A manual standby also has to be activated manually. There is no automatic failover.


With Standard Edition you are only allowed to create a manual standby, not a Managed Standby. This means you have to copy the archive logs and do the recovery manually. If you also keep the standby mounted but not open, you don't need a license for the standby server.
The Care and Feeding of a Standby Database

"The Managed Standby normally does not require a separate Oracle License because it is not open for use.  Normally only one database is active at a time.  Notice the use of the word normally.  If you use your standby for reporting or run it open-read only (11g) you will need to licenses the database."
"To create a Managed Standby database, you must be using the Enterprise Edition of the database.  If you are using the Standard Edition of the database, you can still create a manual standby database, but not a Managed Standby."

With Standard Edition you can also create Pluggable Databases, but you are only allowed one PDB per CDB. You will not get all the benefits of the Multitenant architecture, but you will be able to upgrade the database with a little less downtime.
Multitenant with Only One Pluggable Database: An Upgrade Option to Consider

"In using a single PDB, the features of plugging and unplugging are available. Being able to do this provides another patching option in that you can unplug the PDB and plug it into an already-patched CDB. This can provide a patching scenario with less downtime and stronger testing plans."
Because the redo logs and archive logs are at the CDB level, the standby database has to be a copy of the whole CDB as well. There are no separate redo logs for each PDB, so you cannot replicate a single PDB only.


I suggest using at least version 12.1.0.2, because that version contains a feature to automatically return PDBs to the open state when the CDB is restarted.
12.1.0.2 New Features: PDB State Management Across CDB Restart
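In practice this just means saving the PDB's open state once in the CDB; something along these lines, where PDB1 is only an example name:

-- Run once as SYSDBA in the CDB; after this the PDB is reopened automatically when the CDB restarts
ALTER PLUGGABLE DATABASE PDB1 OPEN;
ALTER PLUGGABLE DATABASE PDB1 SAVE STATE;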

I prefer to use a separate staging area for creating a standby database. You can then first create a copy of the primary database in the staging area and, as a separate step, create a new standby database from that copy.

Step 1: Prepare primary database

 

Create Environment file

 

It will be easier to create scripts if you have all the variables of your system in one file. When you run your scripts, you simply source the environment file first (see the example after the variable list below).
Environment variables I have used:

#ORACLE_SID of primary CDB database
PRIMARY_SID=TEST

# Directory where the standby files are. Same path on both the standby and primary server.
STANDBYLOGS=/u02/Standby

#ORACLE_SID of standby CDB database
STANDBY_SID=TEST

#Root dir of Oracle database files (for all databases)
ORACLE_DATA_DIR=/u02/oradata

#Root dir of Oracle admin files. 
ORACLE_ADMIN_DIR=/u01/app/oracle/admin

# Primary FRA directory
PRIMARY_FRA=/u02/fast_recovery_area
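If you save these variables in a file, for example /home/oracle/standby.env (the path is just an example), every script below can start by sourcing it:

# Load the standby environment variables before running any of the scripts below
. /home/oracle/standby.env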

Mount Standby directory to Primary database. 

 

This directory will be used for transferring the copy of the datafiles and the archive logs. It is easiest to create a "Standby" directory on the standby server and mount it under the same name and path on the primary server. Just remember not to use a directory that physically resides on the primary server, because you would then lose that data in a server failure.

Remember to check that the 'oracle' user has the same user id on both servers, so that access permissions are the same for all directories and files in the shared Standby directory.
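One possible way to set this up is an NFS export from the standby server mounted on the primary server; a sketch, with the hostnames as placeholders:

# On the standby server: export the Standby directory, e.g. with a line like this in /etc/exports
#   /u02/Standby   primaryserver(rw,sync,no_root_squash)

# On the primary server: mount the share under the same path
mkdir -p /u02/Standby
mount -t nfs standbyserver:/u02/Standby /u02/Standby

# Check that the 'oracle' user has the same uid and gid on both servers
id oracle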

Set Primary Database to Archive Log mode

 

export ORACLE_SID=$PRIMARY_SID
sqlplus / as sysdba
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;

 

Enable force logging

 

alter database force logging;

 

Create and edit standby init.ora


#Get pfile from primary database
export ORACLE_SID=$PRIMARY_SID
sqlplus / as sysdba << END
CREATE PFILE='$STANDBYLOGS/init$PRIMARY_SID.Primary.ora' FROM SPFILE;
quit;
END

cp $STANDBYLOGS/init$PRIMARY_SID.Primary.ora $STANDBYLOGS/init$STANDBY_SID.Standby.ora

Edit init$STANDBY_SID.Standby.ora

Change the control files to the standby controlfile location. Remember to change the directories; you cannot use Unix variables here.
*.control_files='/u02/oradata/TEST/controlfile/standby1.ctl', '/u02/oradata/TEST/controlfile/standby2.ctl','/u02/oradata/TEST/controlfile/standby3.ctl'

Change the FRA to the standby's FRA:
*.db_recovery_file_dest='/u02/Standby/fast_recovery_area'

It's easiest (IMHO) to use a listener configured in tnsnames.ora. (TEST is the database SID.)
*.local_listener='LISTENER_TEST'

Configure the listener alias in tnsnames.ora on both the primary and standby servers. Each has its own hostname in the HOST setting (servername.domainname). TEST is the database SID. This way you don't have to set hostnames in the init.ora files.
LISTENER_TEST =
  (ADDRESS = (PROTOCOL = TCP)(HOST = servername.domainname)(PORT = 1521))

 

No database autostart in primary server

 

Please check that there is no autostart of the production database on the primary server. If there is a failure on the primary server and the standby is activated, there is no quick way back to the primary database. The primary database has to be recreated from the currently active standby, meaning you have to stop the standby, take a backup and move that backup to the primary server. Therefore, if the primary server somehow comes back online and the primary database is autostarted, users start to use it instead of the standby, and you end up with two databases used simultaneously that are no longer in sync.

 

Step 2: Create copy of primary database

 


These commands, run on the primary server, create a file copy of the primary database online. There's no need for a shutdown or downtime, but this is a backup, so I advise not running it during busy daytime hours.

#Switch log file and start backup
sqlplus / as sysdba << END
alter system archive log current;
alter system switch logfile;
alter database begin backup;
quit;
END
 

# Take a backup of the datafiles
# This copies all datafiles of the CDB and the PDB
rm -rf $STANDBYLOGS$ORACLE_DATA_DIR/$PRIMARY_SID
mkdir -p $STANDBYLOGS$ORACLE_DATA_DIR/$PRIMARY_SID
cp -r $ORACLE_DATA_DIR/$PRIMARY_SID/* $STANDBYLOGS$ORACLE_DATA_DIR/$PRIMARY_SID
rm -rf $STANDBYLOGS$ORACLE_DATA_DIR/$PRIMARY_SID/onlinelog
rm -rf $STANDBYLOGS$ORACLE_DATA_DIR/$PRIMARY_SID/controlfile

# Stop backup and create standby control file
rm -f $STANDBYLOGS/standby_$PRIMARY_SID.ctl
sqlplus / as sysdba << END
alter database end backup;
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '$STANDBYLOGS/standby_$PRIMARY_SID.ctl';
-- Also create a pfile. We don't need it, but it's good to have a copy in case you need to check parameters quickly.
CREATE PFILE='$STANDBYLOGS/init$PRIMARY_SID.Primary.ora' FROM SPFILE;
alter system archive log current;
alter system switch logfile;
quit;
END

 

Step 3:Create standby database from primary database copy

 

These commands are run on the standby server and they create the standby database.

If you activate the standby, it creates an automatic backup. If you then recreate the standby database, you cannot start recovery once RMAN registers that old backup; the standby database then has a different incarnation than the primary database. Therefore, when we recreate the standby, we have to remove any earlier backups from the standby's FRA.

#Shutdown standby database if it exists 
export ORACLE_SID=$STANDBY_SID
lsnrctl stop listener
sqlplus / as sysdba << END
shutdown immediate;
quit;
END
 

# Create directories and copy files
rm -rf $ORACLE_DATA_DIR/$STANDBY_SID
mkdir -p $ORACLE_DATA_DIR/$STANDBY_SID
cp -r $STANDBYLOGS$ORACLE_DATA_DIR/$PRIMARY_SID/* $ORACLE_DATA_DIR/$STANDBY_SID
mkdir $ORACLE_DATA_DIR/$STANDBY_SID/onlinelog

# Copy control files
mkdir -p $ORACLE_DATA_DIR/$STANDBY_SID/controlfile
cp $STANDBYLOGS/standby_$PRIMARY_SID.ctl $ORACLE_DATA_DIR/$STANDBY_SID/controlfile/standby1.ctl
cp $STANDBYLOGS/standby_$PRIMARY_SID.ctl $ORACLE_DATA_DIR/$STANDBY_SID/controlfile/standby2.ctl
cp $STANDBYLOGS/standby_$PRIMARY_SID.ctl $ORACLE_DATA_DIR/$STANDBY_SID/controlfile/standby3.ctl

# DB parameter file
mkdir -p $ORACLE_ADMIN_DIR/$STANDBY_SID/pfile
cp $STANDBYLOGS/init$STANDBY_SID.Standby.ora $ORACLE_ADMIN_DIR/$STANDBY_SID/pfile


# Remove earlier standby backups
rm -rf $STANDBYLOGS/fast_recovery_area/$STANDBY_SID/autobackup
rm -rf $STANDBYLOGS/fast_recovery_area/$STANDBY_SID/onlinelog

# Start up the standby database and roll it forward
sqlplus / as sysdba << END
STARTUP NOMOUNT pfile='$ORACLE_ADMIN_DIR/$STANDBY_SID/pfile/init$STANDBY_SID.Standby.ora';
create spfile from pfile='$ORACLE_ADMIN_DIR/$STANDBY_SID/pfile/init$STANDBY_SID.Standby.ora';
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER AUTOMATIC STANDBY DATABASE UNTIL CANCEL;
alter database recover cancel;
quit;
END

 

Step 4:Keeping up with the primary database


Copy archive logs to standby

 

On the primary server you have to run these commands from cron to replicate the archive logs to the shared standby directory.

mkdir -p $STANDBYLOGS/fast_recovery_area/$STANDBY_SID/archivelog
rsync -aqz --delete $PRIMARY_FRA/$PRIMARY_SID/archivelog/ $STANDBYLOGS/fast_recovery_area/$STANDBY_SID/archivelog
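For example, the primary server's crontab could ship the logs every minute and force a log switch every five minutes; the script paths are placeholders for wherever you save the commands above:

# Primary server crontab (sketch)
* * * * *    /home/oracle/scripts/copy_archivelogs_to_standby.sh
*/5 * * * *  /home/oracle/scripts/switch_logfile.sh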

 

Apply Archive logs to standby

 

On the standby server you have to run the redo apply from cron.

# Crosscheck the archive logs in the FRA and remove expired entries
export ORACLE_SID=$STANDBY_SID
rman << EOF
connect target /
crosscheck archivelog all;
delete noprompt expired archivelog all;
quit;
EOF
 

# Apply redo to the standby
sqlplus / as sysdba > /dev/null << END
ALTER DATABASE RECOVER AUTOMATIC STANDBY DATABASE UNTIL CANCEL;
alter database recover cancel;
quit;
END
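The two blocks above can likewise be wrapped into a script on the standby server and run from cron, for example every five minutes (the script path is a placeholder):

# Standby server crontab (sketch)
*/5 * * * *  /home/oracle/scripts/apply_archivelogs_to_standby.sh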


 

Delete used archive logs

 

You also have to delete old archive logs from the primary database, but how depends on how you back up the primary database. I prefer to keep archive logs for a week. That way you can always create the standby database again from a week-old copy and just apply the archive logs from that week. It will take a long time, but at least you don't have to do anything heavy on the primary server.
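One way to handle this is to let RMAN remove archive logs older than a week on the primary server; a sketch, to be aligned with your own backup policy:

# Remove archive logs older than one week from the primary (run from cron, e.g. once a day)
export ORACLE_SID=$PRIMARY_SID
rman target / << EOF
crosscheck archivelog all;
delete noprompt archivelog all completed before 'sysdate-7';
quit;
EOF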

 

Step 5. Activating the standby. 


I'm not posting complete failover instructions, because in real-world situations you first try to get the primary back online. Then you try to get as many archive logs out of the primary server as possible, and only then do you start to activate the standby.

In testing you can activate the standby this way:
sqlplus / AS SYSDBA << END
ALTER DATABASE ACTIVATE STANDBY DATABASE;
shutdown immediate
startup;
END

 

Conclusion

 

Using the Multitenant architecture with a manual standby needed only minor changes in the scripts, mainly in the location of the datafiles. In a non-CDB database all the datafiles are normally in one /datafile directory. In the Multitenant architecture the CDB datafiles are in the /datafile directory, and each PDB (in this case only one) has its own directory containing a /datafile subdirectory.

Tuesday, October 28, 2014

Prolimit Predator Kite Harness

Like I said in Good Stuff: Kite harness Prolimit Aaron Hadlow Special, I bought the KiteWaist Pro to replace the old war horse. But it wasn't flexible enough for my liking, so I decided to use it only for snowkiting, and maybe with a drysuit in the cold.

For summer kiting I then bought the Prolimit Predator and have now used it for one summer.


It feels great, gives me the same flexibility as the Aaron Hadlow Special, and the quality seems to be at the same high level. I'm pretty sure this thing will last as long as the Aaron Hadlow Special did.

Saturday, September 27, 2014

Qi charging : it just works

I bought a Samsung S4 Active last spring and I didn't do my homework properly. The S4 Active does not have any official Qi charging covers.

So I had to go unofficial. It is possible to insert a Qi tag inside the S4 Active's back cover, but the tag needs to be very thin, and at the same time you lose some of its water tightness. And if you lose only some of it, you lose all of it: it is not watertight anymore. So, it's your choice...

This one is slim enough to fit inside S4 Active:
S4 SlimPWRcard

It seems the Qi market is still small. I had to order some parts from Germany, some from the USA and some from Korea.

As a home charging station I bought the LG WCP-400. It took some time to find a dealer, because nobody in Europe had it anymore, so I had to order it from the USA, and not all dealers deliver to Europe.


I basically ordered this one for its nice design; it's also easy to read your phone's screen when it's tilted.

For the car there were not a lot of choices. LG has a smartphone holder for the car, the SMART FIT2 (TCD-C800P), that can be fitted with the LG WCP-300 charging station.


In the picture above, the smartphone holder has a white LG WCP-300 inserted inside it. The car mount I had to order via eBay from Korea.

Of course it was difficult to find an original LG WCP-300 as well, so I had to order a China copy, actually two. There are a lot of them on Amazon and I think they all come from the same factory. They fit nicely inside the car mount.

Because today's smartphones need a daily dose of electricity, wireless charging removes all that inconvenience. You just place the phone in its stand and it starts charging.

Friday, September 12, 2014

Ebola, exponential growth and data visualization

Ebola is not the most pleasant of topics, but whenever the media has reported on it, the illustration has usually been a curve attacking exponentially upwards. Yet while the numbers of infected were still small, it felt at the same time like there was no cause for alarm.

When a growth curve follows exponential growth, the first thing to remember is that the speed of growth always "surprises" you. The daily growth stays the same in percentage terms, but as that growth compounds, the tail of the curve always looks like it's climbing toward a wall. Exponential growth has the property that if the growth of the last few days astonished you, the growth of the coming days will astonish you exponentially.

This data is from Wikipedia:
2014 West Africa Ebola virus outbreak



This chart breaks down the number of infections by country. These are the curves you have occasionally seen in the papers. However, they do not give a completely accurate picture of the course of the epidemic. The precision and clarity of the chart suffer because the difference between the numbers at the start of the epidemic and the present is too large. We also don't really see the exponential nature of the epidemic from this chart.


So let's switch the chart to a logarithmic scale.




The logarithmic scale shows clearly how closely the growth follows an exponential curve. From mid-June, 18.6, onwards the curve for the total number of infections looks like a straight line. Even though individual countries differ, the total is astonishingly predictable. In a month, on 7.10, the number of infected will exceed the 10 000 mark.

Let's not resort to scaremongering, though. There are differences between countries, as the curves show. Nigeria, for example, moved immediately to a state of emergency when the first infections appeared and has been able to keep the number of cases small. On the other hand, there is no room for complacency either. Guinea seemed to get the disease under control in July, but since then its growth rate has been the same as everywhere else.

What even the logarithmic chart doesn't show is that the growth rate has actually accelerated slightly. Until mid-August the growth rate was 2,5% per day, whereas since then it has been 3,1%. In exponential growth a difference like that is significant. The change in the overall growth rate is caused by Liberia's rising share of the infections relative to the other countries and its high growth rate of 5,3%. Even in Liberia, though, the epidemic is settling toward the level of the other countries, and the current growth rate is about 3,3% per day. Even a growth rate like that still means the count increases roughly 2,5-fold in a month.

The WHO surely has more precise estimates of the course of the epidemic, but put simply, the curves above mean that every day adds 3,1% to the cost of treating the disease.

Sunday, August 17, 2014

Exponential growth and Ebola

While the subject is grim, Ebola infections show clearly how exponential growth works. Once you see it, it is easy to predict what will happen and what should be done.

This data is from Wikipedia:
2014 West Africa Ebola virus outbreak


Here's the growth curve you normally see in the media. The growth is clearly exponential, and the end of the curve, the current situation, always looks like it's skyrocketing. (Well, it is.) The situation in Guinea and Sierra Leone shows more linear growth, which is better. Liberia is quickly catching up with Sierra Leone.

But what happens when you switch to logarithmic scale?

On the logarithmic scale you can see that the growth is a straight line from 18.6 to 13.8. That is because the growth of Ebola is exponential. The daily growth rate from 18.6 to 13.8 is 2,5%, meaning the number of Ebola infections more than doubles in one month. It also means that for every day the western countries wait before giving help, 2,5% more help is needed the next day.
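You can check what a constant daily growth rate compounds to with a couple of lines of Perl (a sketch that treats a month as 30 days):

# Compound a fixed daily growth rate over a month and compute the doubling time
use strict;
use warnings;
for my $daily (0.025, 0.031) {
    my $monthly  = (1 + $daily) ** 30;          # growth factor over 30 days
    my $doubling = log(2) / log(1 + $daily);    # days needed for the count to double
    printf "%.1f%% per day -> %.1fx per month, doubling time %.0f days\n",
        100 * $daily, $monthly, $doubling;
}

With 2,5% daily growth the count doubles roughly every four weeks; at 3,1% it doubles in about three weeks.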

Update: Here's the graph updated with the latest information from 26.8.
The growth rate is still a straight line.