How to generate self-signed X.509 certificates for local development (Nginx)

These days, security and encryption are among the most important topics, so we need a certain amount of knowledge about the commonly used methods. TLS/SSL works by using a combination of a public certificate and a private key. The SSL key is kept secret on the server and is used to encrypt content sent to clients. The SSL certificate is publicly shared with anyone requesting the content and can be used to decrypt content signed by the associated SSL key.

We can create a self-signed key and certificate pair with OpenSSL in a single command. However, newer browsers complain about self-signed certificates, so we need some additional configuration for them. We'll create only one configuration file to keep the steps to a minimum.

First, create a file named nginx-selfsigned.conf with the following content:

[req]
default_keyfile    = /etc/ssl/private/nginx-selfsigned.key
distinguished_name = req_distinguished_name
prompt             = no
x509_extensions    = v3_ca

[req_distinguished_name]
C            = DE
ST           = Berlin
L            = Berlin
O            = MYORG
OU           = MYTEAM
CN           =
emailAddress =

[v3_ca]
subjectKeyIdentifier   = hash
authorityKeyIdentifier = keyid
basicConstraints       = CA:true
subjectAltName         = @alt_names

[alt_names]
DNS.1 =

Let's walk through this configuration file at a high level. Normally, creating a certificate requires several steps: creating a key, creating a signing request, signing it, and so on. Since we'll create the self-signed certificate in a single command, we only need a section for the request parameters. There we define the location of the key file to be created. During a certificate request, a few questions are asked to identify the requester; the req_distinguished_name section answers those questions, and the distinguished_name line references that section. We then disable the interactive prompt, since the answers are already in the configuration. Newer browsers such as Chrome require subjectAltName information to validate a certificate, so we create an extensions section and specify the necessary attributes there. You can search the internet for detailed descriptions of the attributes, but be careful with the Common Name (CN) field: it should be the exact Fully Qualified Domain Name (FQDN) or IP address of the host you intend to use the certificate with.

After creating the configuration file, we run this command:

sudo openssl req -x509 -nodes -sha256 -days 1024 -newkey rsa:2048 -config nginx-selfsigned.conf -out /usr/local/share/ca-certificates/nginx-selfsigned.crt

It basically says that we want to create an X.509 certificate with a key file, valid for 1024 days, using our configuration file. After this command runs successfully, you should see the certificate at /usr/local/share/ca-certificates/nginx-selfsigned.crt. At this point we have created the certificate, but a few more steps are needed to use it properly.
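As a sanity check, you can confirm that a key and certificate belong together by comparing their RSA moduli. The sketch below is self-contained for illustration: it generates a throwaway key and certificate in a temp directory with a minimal config (paths and CN are placeholders, not the files created above):

```shell
#!/bin/sh
# Generate a throwaway key + self-signed certificate, then verify that
# both share the same RSA modulus. Paths and CN are illustrative only.
dir=$(mktemp -d)
cat > "$dir/selfsigned.conf" <<'EOF'
[req]
distinguished_name = req_distinguished_name
prompt             = no

[req_distinguished_name]
C  = DE
CN = localhost
EOF
openssl req -x509 -nodes -sha256 -days 30 -newkey rsa:2048 \
    -config "$dir/selfsigned.conf" \
    -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
crt_mod=$(openssl x509 -noout -modulus -in "$dir/cert.pem")
key_mod=$(openssl rsa  -noout -modulus -in "$dir/key.pem")
[ "$crt_mod" = "$key_mod" ] && echo "key and certificate match"
```

The same modulus check works against the real files under /etc/ssl/private and /usr/local/share/ca-certificates.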

For OpenSSL to find the certificate, it must be looked up by its hash. Rather than renaming the CA certificate, you would normally create a symbolic link from the hash value to the meaningful name of the CA (a hard link also works, but symbolic links make it easier to spot which hash belongs to which certificate). The link name must be the hash value plus ".0" – if you forget the .0, OpenSSL won't detect the certificate and you'll get lots of errors. To stay closer to standard Linux utilities, you can use the command below (assuming the crt file is in the /usr/local/share/ca-certificates directory; otherwise use the command in the comment):

sudo update-ca-certificates
### Fallback if the command above doesn't work:
# sudo ln -s /usr/local/share/ca-certificates/nginx-selfsigned.crt /etc/ssl/certs/`openssl x509 -hash -noout -in /usr/local/share/ca-certificates/nginx-selfsigned.crt`.0

We're now ready to test the installation. Ideally we would test with a certificate signed by the newly installed CA; failing that, you can verify the CA certificate itself, though this won't always surface every possible error. Run:

openssl verify /usr/local/share/ca-certificates/nginx-selfsigned.crt

If you’ve got it correct, you should see something like:

/usr/local/share/ca-certificates/nginx-selfsigned.crt: OK

This tells you that your CA certificate is correctly installed. If you want to see what the certificate looks like, you can run:

openssl x509 -in /usr/local/share/ca-certificates/nginx-selfsigned.crt -text -noout

Create a Configuration Snippet Pointing to the SSL Key and Certificate For Nginx

First, let’s create a new Nginx configuration snippet in the /etc/nginx/snippets directory. Within this file, we just need to set the ssl_certificate directive to our certificate file and the ssl_certificate_key to the associated key.

sudo vi /etc/nginx/snippets/self-signed.conf

In our case, this will look like this:

ssl_certificate /etc/ssl/certs/nginx-selfsigned.pem;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

Adjust the Nginx Configuration to Use SSL

By default, the Nginx configuration file is at /etc/nginx/sites-available/default. Let's open it:

sudo vi /etc/nginx/sites-available/default

Comment out the conflicting lines and include the new snippet. When you're done editing, the configuration should contain the lines below:

listen 443 ssl;
listen [::]:443 ssl default_server;
include snippets/self-signed.conf;
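For orientation, here is a minimal sketch of how those lines might sit inside the default server block; server_name and the web root are placeholders and should match your certificate's CN and your site:

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl default_server;
    include snippets/self-signed.conf;

    server_name;   # must match the certificate's CN/FQDN
    root /var/www/html;
    index index.html;
}
```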

Now that we've made our changes, we can restart Nginx to apply them.
First, we should make sure that there are no syntax errors in our files. We can do this by typing:

sudo nginx -t

If everything is successful, you should see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If your output matches the above, your configuration file has no syntax errors. We can safely restart Nginx to implement our changes:

sudo systemctl restart nginx

For testing, you can point the HTTPS domain at your localhost with an entry in /etc/hosts that maps the certificate's FQDN to

Test that TLS is working as expected (especially the validation):

openssl s_client -connect

The output should look like this at the beginning:

depth=0 C = DE, ST = Berlin, L = Berlin, O = MYORG, OU = MYTEAM, CN =, emailAddress =
verify return:1
Certificate chain
 0 s:/C=DE/ST=Berlin/L=Berlin/O=MYORG/OU=MYTEAM/
Server certificate

To use this certificate in Chrome, you should add it to Chrome's trust database:

certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n "My Homemade CA" -i /etc/ssl/certs/nginx-selfsigned.pem

And check that it exists:

certutil -d sql:$HOME/.pki/nssdb -L

Developers Rock!!!

Posted in Linux

Gpg – can’t connect to `/home/${USER}/.gnupg/S.gpg-agent’: No such file or directory

Recently, I got the error below while trying to create a new gpg key:

can’t connect to `/home/aykut/.gnupg/S.gpg-agent’: No such file or directory
gpg-agent[27029]: command get_passphrase failed: Operation cancelled
gpg: cancelled by user
gpg: Key generation canceled.

One might think the problem is that gpg-agent is not running. That might indeed be the case in your situation, so first run the command below and see what happens:

gpg-agent --daemon

If the output says that some files were created and the agent has just started, try creating your gpg key again. If instead the message says the agent is already running, as in my case, the issue is probably not agent-related. In my case, I had logged in to the server as one user and then switched to another with the su command. To use pinentry, you must have the proper permissions on the terminal device in use (e.g. /dev/tty1). With su (or sudo), however, ownership of the terminal stays with the original user, not the new one, so pinentry fails, even as root. As a workaround, running the command under script worked:

script -q -c "gpg --gen-key" /dev/null

Some have also suggested that changing the device's owner to the second user works, but I haven't tried it:

chown ${TARGET_USER} /dev/ttyN  # where N is the current tty (login user)
su ${TARGET_USER} # (login user)
gpg --gen-key # (new user)
exit # (new user)
chown ${LOGIN_USER} /dev/ttyN  # change back to old user (login user)

Developers Rock!!!

Posted in Linux, Shell Script

Oracle Null Safe Comparison (Spoiler alert: SYS_OP_MAP_NONNULL)

Even the most experienced programmers are caught in the null-comparison trap from time to time. Let's say we have two different customer tables, and our business requirement is to find the customers that have the same first name, middle name, and last name in both tables. Let's also say we can join the tables on their respective id columns.


When we run the below query:

select, t1.first_name, t1.middle_name, t1.last_name
from customer_data_1 t1
join customer_data_2 t2 on =
where t1.first_name = t2.first_name
and t1.middle_name = t2.middle_name
and t1.last_name = t2.last_name;

The result will look like this:


As you can see, we missed customers 4 and 6 even though their first names and last names are equal. One way to solve this is to use the nvl function with a value that cannot exist in either table:

select, t1.first_name, t1.middle_name, t1.last_name
from customer_data_1 t1
join customer_data_2 t2 on =
where t1.first_name = t2.first_name
and nvl(t1.middle_name, 'XXXXX') = nvl(t2.middle_name, 'XXXXX')
and t1.last_name = t2.last_name;

With the help of the nvl function, we are able to retrieve those two missing customers:


However, sometimes we cannot be sure that the substitute value will never exist. Another method is the undocumented function SYS_OP_MAP_NONNULL. This function converts values to byte code, and for null it generates FFFF. Let's run the same query using this function:

select, t1.first_name, t1.middle_name, t1.last_name
from customer_data_1 t1
join customer_data_2 t2 on =
where t1.first_name = t2.first_name
and SYS_OP_MAP_NONNULL(t1.middle_name) = SYS_OP_MAP_NONNULL(t2.middle_name)
and t1.last_name = t2.last_name;

And the result is null safe comparison:


As you can see, we can avoid the null trap using the undocumented SYS_OP_MAP_NONNULL function.
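If you would rather avoid an undocumented (and therefore unsupported) function, the same null-safe comparison can be written with an explicit null check; this sketch assumes the same tables as above:

```sql
select, t1.first_name, t1.middle_name, t1.last_name
from customer_data_1 t1
join customer_data_2 t2 on =
where t1.first_name = t2.first_name
and (t1.middle_name = t2.middle_name
     or (t1.middle_name is null and t2.middle_name is null))
and t1.last_name = t2.last_name;
```

It is more verbose, but it relies only on documented SQL semantics.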

Developers Rock!!!

Posted in Oracle

Git – Moving folders with history and all related branches

In my case, I had to move 2 different subfolders, with their histories and all affected branches, to a new bare repository. My folder structure looked like this:

old repo
|___ folder1
|___ folder2
|___ folder3
| |___ folder3_1
| |___ folder3_2
| |___ folder3_3
| |___ folder3_4
|___ folder4

And what I needed in my new bare repository was:

new repo
|___ folder3_2
|___ folder3_4

To achieve this, I cloned my old repository's content as if it were my new repository. This way, I don't need to move files from one folder to another. If you're afraid of breaking something in the old repository, don't be! As long as you don't force a git push, you are safe.

git clone new-repo && cd new-repo

Then I removed everything and rewrote the git history, except for the directories I want to keep:

git filter-branch --index-filter 'git rm --cached -qr --ignore-unmatch -- . && git reset -q $GIT_COMMIT -- folder3/folder3_2 folder3/folder3_4' --prune-empty -- --all

The command above is the most important one to understand, because it is where the magic happens. We use the filter-branch option to rewrite our git history. Inside the quotes, we delete everything in the repository and then reset only the folders we want to keep. $GIT_COMMIT is a variable that is available inside the filter-branch command. Don't forget to change the folder3/folder3_2 folder3/folder3_4 part to the directories you want to keep. Also note the --all and --prune-empty arguments: thanks to --all, we filter all branches, not only the checked-out one, and --prune-empty eliminates the commits that become empty in these branches.

After this command succeeded, my folder structure looks like this:

new repo
|___ folder3
| |___ folder3_2
| |___ folder3_4

As you can see, I need another step to achieve the desired folder structure. I'll use the same logic, but this time with a different filter option.

git filter-branch -f --subdirectory-filter folder3 --prune-empty -- --all

This time I want to mention the -f option. When we first ran the filter-branch command, we did a very dangerous thing: we rewrote the git history. Because of this, git created a backup in case we want to roll back. When we run filter-branch a second time without forcing, the command fails because the backup from the previous run already exists. With -f, we bypass this error. The other option is to delete the backup refs manually.

After last command, I managed to create my desired folder structure.

new repo
|___ folder3_2
|___ folder3_4

As a last step, I need to push my branches to the new repository. But if I changed my remote repository address right now, all the existing branches except the checked-out one would exist only as remote-tracking refs and would not be pushed. That is why we first need to check out each branch and create a local copy in our new repository.

for remote in `git branch -r | grep -v master`; do git checkout --track $remote; done

Now we can change our remote repository and push:

git remote rm origin
git remote add origin
git push --all

The --all argument in the push command pushes all branches in one go. Now, if you want, you can remove unnecessary branches from your local computer.

Note: If you get an error during the last push, you've probably initialized the new repository with some files. To solve this, you need to merge, or force the push.

Developers Rock!!!

Posted in Git

Installation of Oracle SQL Developer to Ubuntu

First of all, you need Java installed in your environment. I will use the latest OpenJDK version currently available. The easiest way to install the JDK is to run the following commands in a terminal:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openjdk-8-jdk

Then you can download Oracle SQL Developer from the official website; choose the option for other platforms. After the download has completed, we need to unzip the files to a suitable place. Since most third-party applications are usually installed under /opt, I will use this directory for the rest of the operations; however, if you want to extract the zip content somewhere else, there is no restriction on that. Also, don't forget to give execution rights so you can run it from the console easily.

cd ~/Downloads #go to downloads
sudo unzip sqldeveloper-* -d /opt/ #unzip sqldeveloper to /opt directory
sudo chmod +x /opt/sqldeveloper/ #give execution rights

Now everything is ready to run Oracle SQL Developer. You can run it by calling the sh file:

/opt/sqldeveloper/

When you run SQL Developer for the first time, you need to specify the path of the JDK folder. In a default installation it is under: /usr/lib/jvm/java-8-openjdk-amd64/

For convenience, so we don't have to type the full path every time, it is handy to put a caller script in the user's local bin directory (note: a symbolic link doesn't work, since uses `dirname $0`). I will use gedit to create the file, since it ships with the operating system; if you want, you can use another text editor such as Sublime Text or vim.

sudo gedit /usr/local/bin/sqldeveloper

Put the content below inside the file, then save and close it (a minimal wrapper; adjust the path if you extracted somewhere other than /opt):

#!/bin/bash
/opt/sqldeveloper/ "$@"
After you saved, you should also give execution rights to this file.

sudo chmod +x /usr/local/bin/sqldeveloper

From now on, you can run SQL Developer just by typing sqldeveloper on the command line. But if you want to run it like a desktop application, you should create a desktop entry like the one below:

sudo gedit /usr/share/applications/sqldeveloper.desktop

And add these lines to the file and save it (Exec and the other fields are examples; adjust them to your setup):

[Desktop Entry]
Name=Oracle SQL Developer
Type=Application
Exec=sqldeveloper
Terminal=false
Categories=Development;

Then you should update the desktop entries:

sudo update-desktop-database

Now you can find SQL Developer using the search bar on the left. Don't forget to change the Exec part of the desktop entry if needed; for example, if you haven't created the wrapper in /usr/local/bin/, you should write the full path, like /opt/sqldeveloper/

If you have tns entries in your Oracle Home directory and want to see them in SQL Developer, you need to export ORACLE_HOME as an environment variable. There are a couple of ways to do that, and you can choose any of them. If you think the ORACLE_HOME variable will only be used with SQL Developer, you can edit with your favourite text editor and add the line below before the line that runs SQL Developer (assuming a 64-bit version 12.2 client; otherwise make the necessary changes):

export ORACLE_HOME=/usr/lib/oracle/12.2/client64

In the end, your /opt/sqldeveloper/ should look like:

export ORACLE_HOME=/usr/lib/oracle/12.2/client64
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

Another way to add this environment variable is to add the export statement to your profile and bashrc files. To do that, you should edit the files in your home directory:

sudo gedit ~/.profile
sudo gedit ~/.bashrc

and add export ORACLE_HOME=/usr/lib/oracle/12.2/client64 at the end of each file. To make it active in the current terminal window, run:

source ~/.bashrc
source ~/.profile

As a last method, we can create a new script under /etc/profile.d/ to make sure the ORACLE_HOME variable is exported to the environment during system start. Basically, we create a file like:

sudo gedit /etc/profile.d/

and add export ORACLE_HOME=/usr/lib/oracle/12.2/client64, then save it. After that, you can test it by restarting the computer.

Note: If you are using an older version of Ubuntu, you may need to unset GNOME_DESKTOP_SESSION_ID. To do that, edit the shell script content:

sudo gedit /opt/sqldeveloper/

And add the

unset GNOME_DESKTOP_SESSION_ID

command before the line that runs SQL Developer.

Developers Rock!!!

Posted in Linux, Oracle, Shell Script

Getting result from database in shell script

In shell scripting there is no standard way of connecting to a database and retrieving results. For Oracle, however, we can use SQL*Plus to connect and run operations on the database. I tried the code in this post on bash; it may need changes for other shells.

My first example covers how to run a stored procedure from a shell script and check whether the procedure finished successfully. I will create a basic hello-world procedure on the HR schema and edit it during this post. Here is a procedure with one varchar2 parameter:

CREATE OR REPLACE PROCEDURE HR.PRC_HELLO_WORLD(piv_name varchar2 default 'World')
IS
BEGIN
  dbms_output.put_line('Hello ' || piv_name);
END;


After compiling this procedure, I created a shell script called '' and gave my user execute permission on it. Here is the simple shell script that calls the procedure and checks whether the run was successful:

echo "Script started"
echo ""

sqlplus -s hr/hr@localhost:1521/orcl << end_sql
set serveroutput on;
exec hr.prc_hello_world;
exit 0;
end_sql

if [ $? = 0 ]
then
  echo ""
  echo "Yayy, It worked :)"
  echo ""
else
  echo ""
  echo "Hmm, Something wrong happened :("
  echo ""
fi

SQL*Plus has a lot of system variables that you can set. For example, we set SERVEROUTPUT ON to catch standard output messages. In this link, you can find other variables that may help you. When I ran the script, I got the output below:

Shell Script Successful Result
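The exit-status branching itself can be tried without a database: in this sketch, `false` (which always exits with a non-zero status) stands in for a failing sqlplus call:

```shell
#!/bin/sh
# `false` mocks a failing database call; $? holds its exit status.
false
if [ $? = 0 ]
then
    echo "Yayy, It worked :)"
else
    echo "Hmm, Something wrong happened :("
fi
```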

After that, I changed the PRC_HELLO_WORLD procedure to throw an exception after the output. This way, my SQL*Plus connection will exit with value 1, and I can use this value to understand that something went wrong. For this example, I also pass a parameter to the procedure call via a shell script variable, to show how parameters can be used.

CREATE OR REPLACE PROCEDURE HR.PRC_HELLO_WORLD(piv_name varchar2 default 'World')
IS
BEGIN
  dbms_output.put_line('Hello ' || piv_name);
  RAISE_APPLICATION_ERROR (-20001, 'Nicely prepared exception');
END;

echo "Script started"
echo ""

v_my_name="Aykut" # example value for the name parameter

sqlplus -s hr/hr@localhost:1521/orcl << end_sql
set serveroutput on;
exec hr.prc_hello_world('$v_my_name');
exit 0;
end_sql

if [ $? = 0 ]
then
  echo ""
  echo "Yayy, It worked :)"
  echo ""
else
  echo ""
  echo "Hmm, Something wrong happened :("
  echo ""
fi

And the result changed to:

Shell Script Error Result

Well, this may cover most cases, but what if we want to return a status code from the database? Can we use the exit part of SQL*Plus for that? Yes, we can. To do that, I added an out parameter to my procedure and changed the body to respond differently for different cases.

CREATE OR REPLACE PROCEDURE HR.PRC_HELLO_WORLD(piv_name in varchar2 default 'World', pon_rc out number)
IS
BEGIN
  pon_rc := 0;
  IF upper(piv_name) = 'AYKUT'
  THEN
    dbms_output.put_line('Hello Master ' || piv_name);
    pon_rc := 1;
  ELSIF upper(piv_name) = 'DAVID'
  THEN
    RAISE_APPLICATION_ERROR(-20001, 'You are not welcome here');
  ELSE
    dbms_output.put_line('Hello ' || piv_name);
    pon_rc := 2;
  END IF;
END;
Then I changed the shell script to get the name parameter from the console and added an out bind variable to the procedure call. I also replaced the output-control if statement with a line that just shows the out parameter.

echo "Script started"
echo ""

v_my_name=$1

sqlplus -s hr/hr@localhost:1521/orcl << end_sql
set serveroutput on;
variable rc NUMBER;
exec hr.prc_hello_world('$v_my_name', :rc);
exit :rc;
end_sql

echo "Your out parameter is: $?"

With these changes, I made three different calls to my script:

Return Value 1

Return Value 2

Return Value 3

Well, we have done well so far, and there is one more topic I want to mention in this post: how you can use a database table as a source of parameters for shell script variables. I don't know whether the following is best practice, but in a recent project we created a parametric shell script that changes its behavior according to values in a database table using this method. The core technique is writing the parameters to standard output and parsing the values in the shell script.

Let's say we need to copy files matching some pattern from source folders to different target folders for processing, and the source and target paths differ from each other. We want to decide which file goes where using a 'datatype'. With this in mind, we can create a parameter table like the one below and fill it with example values:


CREATE TABLE HR.SHELL_SCRIPT_PARAMS
(
  DATATYPE     VARCHAR2(100),  -- column sizes are illustrative
  SOURCE_DIR   VARCHAR2(1000),
  TARGET_DIR   VARCHAR2(1000),
  FILE_PATTERN VARCHAR2(100)
);

INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype1', '/home/source/type1', '/home/target/type1', 'file1*.txt');
INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype2', '/home/source/type2', '/home/target/type2', 'file2*.txt');
INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype3', '/home/source/type3', '/home/target/type3', 'file3*.txt');
INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype4', '/home/source/type4', '/home/target/type4', 'file4*.txt');


Then we can create a procedure that writes the necessary parameters to standard output for a given datatype. Optionally, I added a delimiter parameter to the procedure so the delimiter is managed in one place. As a best practice the exception should be logged in an exception block, but I skipped that for this example.

CREATE OR REPLACE PROCEDURE HR.PRC_SHELL_SCRIPT_PARAMS(piv_datatype in varchar2, piv_delimiter in varchar2)
IS
    v_params VARCHAR2(4000);
BEGIN
    SELECT source_dir || piv_delimiter
        || target_dir || piv_delimiter
        || file_pattern
    INTO v_params
    FROM hr.shell_script_params
    WHERE datatype = piv_datatype;

    dbms_output.put_line(v_params);
END;


Then I changed the shell script to call this procedure with a given datatype and the specified delimiter. I also set several SQL*Plus options to keep the standard output clean. To keep the example short, I didn't add the copy part to the script below; I just echo the variables to the console.

echo "Script started"
echo ""

DATATYPE=$1
DELIMITER=";" # example delimiter

params=`sqlplus -s hr/hr@localhost:1521/orcl << end_sql
set serveroutput on feedback off verify off heading off
exec hr.prc_shell_script_params('$DATATYPE', '$DELIMITER');
exit 0;
end_sql`

if [ $? != 0 ]
then
  echo ""
  echo "Failed to load parameters from database"
  echo ""
  exit -1
fi

SOURCE_DIR=`echo $params | cut -d$DELIMITER -f1`
TARGET_DIR=`echo $params | cut -d$DELIMITER -f2`
FILE_PATTERN=`echo $params | cut -d$DELIMITER -f3`

echo "Source dir: $SOURCE_DIR"
echo "Target dir: $TARGET_DIR"
echo "File pattern: $FILE_PATTERN"
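The parsing step can be exercised without a database by mocking the single line the procedure would print; the values below are made up for illustration:

```shell
#!/bin/sh
# Mock the procedure's output line and split it with cut, exactly as
# the real script does.
DELIMITER=";"
params="/home/source/type1;/home/target/type1;file1*.txt"
SOURCE_DIR=`echo "$params" | cut -d"$DELIMITER" -f1`
TARGET_DIR=`echo "$params" | cut -d"$DELIMITER" -f2`
FILE_PATTERN=`echo "$params" | cut -d"$DELIMITER" -f3`
echo "Source dir: $SOURCE_DIR"
```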

After the changes, I made two different calls, one successful and one unsuccessful. The results are below:

Shell Script Success Result

Shell Script Error Result

Developers Rock!!!

Posted in Linux, Oracle, Shell Script

MySql UTF8 character set

MySQL database has two implementations for utf8 character set:

  • utf8 (As of MySQL 4.1)
  • utf8mb4 (As of MySQL 5.5)

In this post, I am going to explain the difference between these two character sets and how you can store data with the utf8mb4 character set in your MySQL database.

The key point is that utf8 uses a maximum of 3 bytes per character, while the utf8mb4 character set uses a maximum of 4 bytes per character. For this reason, utf8mb4 can store additional characters that cannot be stored in the utf8mb3 (an alias for utf8) character set.
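You can see the difference from a terminal: UTF-8 encodes an emoji in 4 bytes, which is exactly what the 3-byte-max utf8 (utf8mb3) character set cannot store:

```shell
#!/bin/sh
# Count the UTF-8 bytes of a 4-byte emoji vs. a 1-byte ASCII letter.
printf '😀' | wc -c   # 4 bytes
printf 'a' | wc -c    # 1 byte
```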

To be consistent, I suggest that all of the items below use the utf8mb4 character set if you want to store data properly. I haven't tried it, but you could lose data, or perhaps get an exception during a DML statement, if one of these items is not properly set to utf8mb4.

  • Server Character Set
  • Database Character Set
  • Schema Character Set
  • Table Character Set
  • Column Character Set

Now I am going to explain how to change each item's character set and how to check the current values. I did my experimentation on MySQL 5.7; there could be additional or different steps for other versions. The configurations are inherited from one another, so the server-level setting should satisfy the most generic case. If you encounter a problem, check the configurations from top to bottom to make sure everything is as expected.

Server Character Set:

While the MySQL server is starting up, it reads the necessary properties from several configuration files; you can find related information at this link. On my Ubuntu environment I chose to change the '/etc/mysql/my.cnf' file, and on my Windows environment the 'C:/ProgramData/MySQL/MySQL Server 5.7/my.ini' file. Be careful while changing these files: if you define the same property twice or remove a required property, your database will not start. If any unexpected behavior occurs, suspect your conf files first and investigate your changes. Here are the configurations:

[client]
default-character-set = utf8mb4

[mysql]
default-character-set = utf8mb4

[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci

Note that MySQL has no separate configuration parameter for utf8mb4; the default-character-set option is set exactly the way it would be for utf8. As you can see, we set character-set-server to 'utf8mb4' and collation-server to 'utf8mb4_general_ci'. The collation affects the ordering of characters; further reading about collations can be found at this link. After you change the files, the MySQL server must be restarted. The commands below may change according to your service names, and on Windows you may need to open cmd as an administrator.

On Linux:

sudo service mysql restart

On Windows:

net stop MySQL57
net start MySQL57

When the MySQL server has started, connect to the database and run the command below to check whether your changes worked:

show variables
where variable_name like '%char%';

You should see the server character set similar to the output below:
MySQL Character Set

Database Character Set:
The database character set can be specified when creating the database, or you can alter the database afterwards.

During create phase below command will be enough:

CREATE DATABASE your_database_name CHARACTER SET utf8mb4;

For an existing database, you can change character set using below command:

ALTER DATABASE your_database_name CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;

You can control the changes with the same query as server character set:

show variables
where variable_name like '%char%';

Database Character Set

Schema Character Set:
If you have already executed the steps above, you do not need to do anything specific for the rest. But for migration purposes, I will explain how to alter and check the status of the remaining database objects.

For an existing schema:

ALTER SCHEMA your_schema_name CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;

You can check the changes with below query:

select *
from information_schema.schemata 
where schema_name = "your_schema_name";

Table Character Set:

For an existing table:

ALTER TABLE your_schema_name.your_table_name CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;

You can control the changes with below query:

select *
from information_schema.tables t,
     information_schema.collation_character_set_applicability ccsa
where ccsa.collation_name = t.table_collation
and   t.table_schema = "your_schema_name"
and   t.table_name = "your_table_name";

Column Character Set:

For an existing column:

ALTER TABLE your_schema_name.your_table_name MODIFY your_column_name VARCHAR(4000) CHARACTER SET utf8mb4;

You can control the changes with below query:

select *
from information_schema.columns 
where table_schema = "your_schema_name"
and   table_name = "your_table_name"
and   column_name = "your_column_name";

Connection String of Application:
If you want your application's connection to use utf8, you can use a connection string like the one below in a Java application (the host, port, and database name are placeholders); the same approach can be applied in other languages.

jdbc:mysql://localhost:3306/your_database_name?useUnicode=true&characterEncoding=UTF-8
Developers Rock!!!

Posted in Java, MySql