Git – Moving folders with history and all related branches

In my case, I had to move two different subfolders, with their histories and all affected branches, to a new bare repository. I had a folder structure like the one below:

old repo
|___ folder1
|___ folder2
|___ folder3
| |___ folder3_1
| |___ folder3_2
| |___ folder3_3
| |___ folder3_4
|
|___ folder4

And what I needed in my new bare repository was:

new repo
|___ folder3_2
|___ folder3_4

To achieve this, I cloned my old repository as if it were my new repository. This way, I don't need to move files from one folder to another. If you are afraid of breaking something in the old repository, don't be! As long as you don't force a git push, you are safe.

git clone git@github.com:aakin/old-repo.git new-repo && cd new-repo

Then I removed everything from the Git history except the directories I wanted to keep:

git filter-branch --index-filter 'git rm --cached -qr --ignore-unmatch -- . && git reset -q $GIT_COMMIT -- folder3/folder3_2 folder3/folder3_4' --prune-empty -- --all

The above command is the most important one to understand, because it is where the magic happens. We use the filter-branch option to rewrite our Git history. Inside the quotes, we delete everything from the index and then reset only the folders we want to keep. $GIT_COMMIT is a variable that is available inside filter-branch filters. Don't forget to change the folder3/folder3_2 folder3/folder3_4 part to the directories you want to keep. Also, note the --all and --prune-empty arguments: thanks to --all we filter all the branches, not only the checked-out one, and --prune-empty eliminates the commits that become empty on these branches.
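
To sanity-check the rewrite before going further, you can list the history of one of the kept folders across all branches (a minimal check; the folder name follows the example above):

git log --all --oneline -- folder3/folder3_2 #history of a kept folder should still be intact on every branch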

After this command succeeded, my folder structure looked like this:

new repo
|___ folder3
|  |___ folder3_2
|  |___ folder3_4

As you can see, I need one more step to achieve the desired folder structure. Basically, I'm going to use the same logic, but this time with a different filter option.

git filter-branch -f --subdirectory-filter folder3 --prune-empty -- --all

This time I want to mention the -f option. When we first ran the filter-branch command, we actually did a very dangerous thing: we rewrote the Git history. Because of this, Git created a backup (under refs/original/) in case we want to roll back. When we run filter-branch a second time without forcing it, the command fails because that backup from the previous run already exists. By forcing, we bypass this error. The other option is to delete the backup refs manually.
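
If you prefer the manual cleanup instead of forcing, a minimal way to drop those backup refs looks like this (a sketch; run it only once you are sure you will not need to roll back):

git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d #delete every backup ref created by filter-branch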

After the last command, I finally had my desired folder structure.

new repo
|___ folder3_2
|___ folder3_4

As a last step, I need to push my branches to the new repository. But if I change my remote address to the new one and push right now, only the branches I have locally will make it; the rest exist only as remote-tracking branches. That is why we first need to check out each branch and create a local copy in our clone.

for remote in `git branch -r | grep -v master`; do git checkout --track $remote; done

Now we can change our remote repository and push:

git remote rm origin
git remote add origin git@github.com:aakin/new-repo.git
git push --all

The --all argument in the push command lets us push all branches with a single command. Now, if you want, you can remove unnecessary branches from your local machine.
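
Removing a local branch you no longer need is the usual deletion (a small example; 'some_branch' is just a placeholder name):

git branch -d some_branch #deletes the local branch; it stays on the new remote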

Note: If you get an error during the last push, you have probably initialized the new repository with some files (a README, for example). To solve this, you need to either merge or force the push.
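
If you are sure the contents of the new repository can be discarded, the forced variant would look like this (a sketch; it overwrites whatever is already on the new remote):

git push --force --all origin #overwrite all branches on the new remote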

Developers Rock!!!

Posted in Git

Installation of Oracle SQL Developer on Ubuntu

First of all, you need Java installed in your environment. I will continue with the latest OpenJDK version currently available. The easiest way to install the JDK is to run the following commands in a terminal:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install openjdk-8-jdk

Then you can download Oracle SQL Developer from the official website; choose the 'Other Platforms' option. After the download has completed, we need to unzip the files to a suitable place. Since most third-party applications are usually installed under /opt, I will use this directory for the rest of the operations. However, if you want to extract the zip contents somewhere else, there is no restriction on that. Also, don't forget to give execution rights so you can run it from the console easily.

cd ~/Downloads #go to downloads
sudo unzip sqldeveloper-*-no-jre.zip -d /opt/ #unzip sqldeveloper to /opt directory
sudo chmod +x /opt/sqldeveloper/sqldeveloper.sh #give execution rights

Now everything is ready to run Oracle SQL Developer. You can run it by calling the sh file:

/opt/sqldeveloper/sqldeveloper.sh

When you run SQL Developer for the first time, you need to specify the path of the JDK folder. In a default installation it is under /usr/lib/jvm/java-8-openjdk-amd64/.
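
If you are not sure where your JDK lives, resolving the java binary is a quick way to find out (a small sketch; it assumes java is on your PATH, and the JDK home is the directory above bin/, or above jre/bin/ in the JDK 8 layout):

readlink -f "$(which java)" #prints the real path of the java binary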

For convenience, and so we don't have to write the full path on the console every time, it is handy to put a caller script for sqldeveloper.sh into /usr/local/bin (note: a symbolic link doesn't work, since sqldeveloper.sh contains `dirname $0`). I will use gedit to create the file since it is already on the system; if you want, you can use another text editor like Sublime Text, vim, etc.

sudo gedit /usr/local/bin/sqldeveloper

Put the content below inside the file, then save and close it:

#!/bin/bash
exec /opt/sqldeveloper/sqldeveloper.sh "$@"

After saving, you should also give execution rights to this file.

sudo chmod +x /usr/local/bin/sqldeveloper

From now on, you can run SQL Developer just by typing sqldeveloper on the command line. But if you want to run it like a desktop application, you should create a desktop entry like the one below:

sudo gedit /usr/share/applications/sqldeveloper.desktop

And add these lines to the file and save it:

[Desktop Entry]
Exec=sqldeveloper
Terminal=false
StartupNotify=true
Categories=GNOME;Oracle;
Type=Application
Icon=/opt/sqldeveloper/icon.png
Name=Oracle SQL Developer

Then you should update the desktop entries:

sudo update-desktop-database

Now you can find SQL Developer using the search bar. Don't forget to change the Exec part of the desktop entry if needed; for example, if you haven't created the wrapper in /usr/local/bin/, you should write the full path, /opt/sqldeveloper/sqldeveloper.sh.

If you have TNS entries in your Oracle Home directory and want to see those entries in SQL Developer, you need to export ORACLE_HOME as an environment variable. There are a couple of ways to do that, and you can choose any of them. If you think the ORACLE_HOME variable is only going to be used by SQL Developer, you can edit sqldeveloper.sh with your favourite text editor. You need to add the line below before the line that runs SQL Developer (assuming you have the 12.2, 64-bit client; otherwise adjust the path accordingly):

export ORACLE_HOME=/usr/lib/oracle/12.2/client64

In the end, your /opt/sqldeveloper/sqldeveloper.sh should look like:

#!/bin/bash
export ORACLE_HOME=/usr/lib/oracle/12.2/client64
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*

Another way to add this environment variable is to put the export statement into your .profile and .bashrc files. To do that, edit the files in your home directory (no sudo is needed, since they belong to your user):

gedit ~/.profile
gedit ~/.bashrc

and add export ORACLE_HOME=/usr/lib/oracle/12.2/client64 at the end of the file. To make it active in the current terminal window, run the commands below:

source ~/.bashrc
source ~/.profile

As a last method, we can create a new script under /etc/profile.d/ to make sure the ORACLE_HOME variable is exported to the environment at system start. Basically, we create a file like this:

sudo gedit /etc/profile.d/oracle.sh

and add export ORACLE_HOME=/usr/lib/oracle/12.2/client64, then save it. After that, you can test it by restarting the computer.

Note: If you are using an older version of Ubuntu, you may need to unset GNOME_DESKTOP_SESSION_ID. To do that, edit the shell script:

sudo gedit /opt/sqldeveloper/sqldeveloper.sh

And add:

unset -v GNOME_DESKTOP_SESSION_ID

command before running SQL Developer.
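
With both tweaks in place, /opt/sqldeveloper/sqldeveloper.sh would look roughly like this (a sketch combining the ORACLE_HOME export from above with the unset; the last line is the script's original content):

#!/bin/bash
unset -v GNOME_DESKTOP_SESSION_ID
export ORACLE_HOME=/usr/lib/oracle/12.2/client64
cd "`dirname $0`"/sqldeveloper/bin && bash sqldeveloper $*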

Developers Rock!!!

Posted in Linux, Oracle, Shell Script

Getting a result from the database in a shell script

In shell scripting, there is no standard way of connecting to a database and retrieving results. But for Oracle we can use SQL*Plus to connect and run operations on the database. I tried the code in this post on bash; it may need changes for other shells.

My first example basically covers how to run a stored procedure from a shell script and check whether the procedure finished successfully. I will create a basic hello-world procedure on the HR schema and edit it throughout this post. Here is a procedure with one varchar2 parameter:

CREATE OR REPLACE PROCEDURE HR.PRC_HELLO_WORLD(piv_name varchar2 default 'World')
AS 
BEGIN

  dbms_output.put_line('Hello ' || piv_name);

END PRC_HELLO_WORLD;

After compiling this procedure, I created a shell script called 'call_procedure.sh' and gave my user execute permission on it. Here is the simple shell script that calls the procedure and checks whether the run was successful:

#!/bin/bash
echo "Script started"
echo ""

sqlplus -s hr/hr@localhost:1521/orcl << end_sql
WHENEVER SQLERROR EXIT 1 ROLLBACK
SET SERVEROUTPUT ON
exec hr.prc_hello_world;
exit 0;
end_sql

if [ $? = 0 ]
then
  echo ""
  echo "Yayy, It worked :)"
  echo ""
else
  echo ""
  echo "Hmm, Something wrong happened :("
  echo ""
fi

SQL*Plus has a lot of system variables that you can set. For example, we set 'SERVEROUTPUT ON' to catch standard output messages; the SQL*Plus documentation lists the other variables that may help you. When I ran the script I got the output below:

Shell Script Successful Result

After that, I changed the PRC_HELLO_WORLD procedure to throw an exception after the output. This way, my SQL*Plus session exits with the value 1, and I can tell something went wrong by checking this value. For this example, I also pass a parameter to my procedure call via a shell script variable, to show how parameters can be used.

CREATE OR REPLACE PROCEDURE HR.PRC_HELLO_WORLD(piv_name varchar2 default 'World')
AS 
BEGIN

  dbms_output.put_line('Hello ' || piv_name);

  RAISE_APPLICATION_ERROR (-20001, 'Nicely prepared exception');

END PRC_HELLO_WORLD;

And here is the updated call_procedure.sh:

#!/bin/bash
echo "Script started"
echo ""

v_my_name="Aykut"

sqlplus -s hr/hr@localhost:1521/orcl << end_sql
WHENEVER SQLERROR EXIT 1 ROLLBACK
SET SERVEROUTPUT ON
exec hr.prc_hello_world('$v_my_name');
exit 0;
end_sql

if [ $? = 0 ]
then
  echo ""
  echo "Yayy, It worked :)"
  echo ""
else
  echo ""
  echo "Hmm, Something wrong happened :("
  echo ""
fi

And the result changed like this:

Shell Script Error Result

Well, this may cover most cases, but what if we want to return a status code from the database? Can we use the exit value of SQL*Plus for that? Yes, we can. To do that, I added an out parameter to my procedure and changed the body to respond to different cases.

CREATE OR REPLACE PROCEDURE HR.PRC_HELLO_WORLD(piv_name in varchar2 default 'World', pon_rc out number)
AS 
BEGIN

  IF upper(piv_name) = 'AYKUT'
  THEN
    dbms_output.put_line('Hello Master ' || piv_name);
    pon_rc := 1;
  ELSIF upper(piv_name) = 'DAVID'
  THEN
    RAISE_APPLICATION_ERROR(-20001, 'You are not welcome here');
  ELSE
    dbms_output.put_line('Hello ' || piv_name);
    pon_rc := 2;
  END IF;

EXCEPTION
  WHEN OTHERS
  THEN
    pon_rc := 0;
END PRC_HELLO_WORLD;

Then I changed the shell script to take the name parameter from the console and added an out parameter to the procedure call. I also dropped the if statement that checked the result and simply echo the out parameter instead.

#!/bin/bash
echo "Script started"
echo ""

v_my_name=$1

sqlplus -s hr/hr@localhost:1521/orcl << end_sql
WHENEVER SQLERROR EXIT 1 ROLLBACK
SET SERVEROUTPUT ON
variable rc NUMBER;
exec hr.prc_hello_world('$v_my_name', :rc);
exit :rc;
end_sql

echo "Your out parameter is: $?"

With these changes, I made three different calls to my script:

Return Value 1

Return Value 2

Return Value 3

Well, we have done well so far. There is one more topic I want to mention in this post: how you can use a database table as a source of parameters for shell script variables. I don't know whether the following is best practice, but in a recent project we created a parametric shell script that changes its behavior according to values in a database table using this method. The core technique is simply writing the parameters to standard output and parsing the values in the shell script.

Let's say we need to copy files that match some pattern from source folders to different target folders for processing, and the source and target paths differ for each case. We want to decide which file goes where using a 'datatype'. With this in mind, we can create a parameter table like the one below and fill it with example values:

CREATE TABLE HR.SHELL_SCRIPT_PARAMS (
  DATATYPE VARCHAR2(200),
  SOURCE_DIR VARCHAR2(4000) NOT NULL,
  TARGET_DIR VARCHAR2(4000) NOT NULL,
  FILE_PATTERN VARCHAR2(4000) NOT NULL,
  CONSTRAINT PK_SHELL_SCRIPT_PARAMS PRIMARY KEY (DATATYPE)
);

INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype1', '/home/source/type1', '/home/target/type1', 'file1*.txt');
INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype2', '/home/source/type2', '/home/target/type2', 'file2*.txt');
INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype3', '/home/source/type3', '/home/target/type3', 'file3*.txt');
INSERT INTO HR.SHELL_SCRIPT_PARAMS (DATATYPE, SOURCE_DIR, TARGET_DIR, FILE_PATTERN) VALUES ('filetype4', '/home/source/type4', '/home/target/type4', 'file4*.txt');

commit;

Then we can create a procedure that writes the necessary parameters to standard output for a given datatype. Optionally, I added a delimiter parameter to the procedure so the delimiter is managed in only one place. As a best practice, exceptions should be caught and logged in an exception block, but I skipped that for this example.

CREATE OR REPLACE PROCEDURE HR.PRC_SHELL_SCRIPT_PARAMS(piv_datatype IN VARCHAR2, piv_delimiter IN VARCHAR2)
AS
    v_params VARCHAR2(4000);
BEGIN
    SELECT source_dir || piv_delimiter 
        || target_dir|| piv_delimiter
        || file_pattern
    INTO v_params
    FROM HR.SHELL_SCRIPT_PARAMS t
    WHERE DATATYPE = piv_datatype;

    DBMS_OUTPUT.PUT_LINE(v_params);
END PRC_SHELL_SCRIPT_PARAMS;

Then I changed the shell script to call this procedure with the given datatype and the specified delimiter. I also set a lot of SQL*Plus parameters to keep the standard output clean. I didn't add the copy part to the script below; I just echo the variables to the console (a sketch of the copy step follows after the script).

#!/bin/bash
echo "Script started"
echo ""

DATATYPE=$1
DELIMITER="#"

  params=`sqlplus -s hr/hr@localhost:1521/orcl << end_sql
WHENEVER SQLERROR EXIT 1 ROLLBACK
SET SERVEROUTPUT ON
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING OFF
SET ECHO OFF
SET LINESIZE 2000
SET TRIMSPOOL ON
SET TRIMOUT ON
SET TERMOUT OFF
SET WRAP OFF
exec hr.prc_shell_script_params('$DATATYPE', '$DELIMITER');
exit 0;
end_sql`

if [ $? != 0 ]
then
  echo ""
  echo "Failed to load parameters from database"
  echo ""
  exit -1
fi

SOURCE_DIR=`echo $params | cut -d$DELIMITER -f1`
TARGET_DIR=`echo $params | cut -d$DELIMITER -f2`
FILE_PATTERN=`echo $params | cut -d$DELIMITER -f3`

echo ""
echo "DATATYPE: $DATATYPE"
echo "SOURCE_DIR: $SOURCE_DIR"
echo "TARGET_DIR: $TARGET_DIR"
echo "FILE_PATTERN: $FILE_PATTERN"
echo ""
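
For completeness, the copy step I left out could be a simple loop over the matching files, something like this (a sketch only; it assumes the pattern should expand inside SOURCE_DIR and that TARGET_DIR already exists):

for f in "$SOURCE_DIR"/$FILE_PATTERN; do #copy every file matching the pattern
  [ -e "$f" ] || continue #skip when nothing matched the pattern
  cp "$f" "$TARGET_DIR"/
done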

After the changes, I made two different calls, one successful and the other unsuccessful. The results are below:

Shell Script Success Result

Shell Script Error Result

Developers Rock!!!

Posted in Linux, Oracle, Shell Script

MySQL UTF8 character set

MySQL has two implementations of the UTF-8 character set:

  • utf8 (As of MySQL 4.1)
  • utf8mb4 (As of MySQL 5.5)

In this post, I am going to explain the difference between these two character sets and how you can store data with the utf8mb4 character set in your MySQL database.

The key point is that utf8 uses a maximum of 3 bytes per character, while utf8mb4 uses a maximum of 4 bytes per character. For this reason, utf8mb4 can store additional characters (emoji, for example) that cannot be stored by the utf8mb3 (an alias for utf8) character set.

To be consistent, I suggest that all of the items below use the utf8mb4 character set if you want to store data properly. I haven't tried it, but you could lose data or get an exception during a DML statement if one of these items is not properly set to utf8mb4.

  • Server Character Set
  • Database Character Set
  • Schema Character Set
  • Table Character Set
  • Column Character Set

Now I am going to explain how you can change each item's character set and how you can check its current value. I did my experimentation on MySQL 5.7; there could be additional or different steps for other versions. The configurations are inherited from one another, so setting the upper levels should be enough for the most common cases. If you encounter a problem, check the configurations from top to bottom to make sure everything is as expected.

Server Character Set:

While the MySQL server is starting up, it reads the necessary properties from several configuration files; the MySQL documentation describes which files are read and in what order. For my Ubuntu environment I chose to change the '/etc/mysql/my.cnf' file, and for my Windows environment the 'C:/ProgramData/MySQL/MySQL Server 5.7/my.ini' file. Be careful while changing these files: if you define the same property twice or remove a required property, your database will not start. If any unexpected behavior occurs, suspect your conf files first and investigate your changes. Here are the configurations:

[client]
default-character-set=utf8

[mysql]
default-character-set=utf8

[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
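
If you are not sure which configuration files your installation actually reads, the client can tell you (a small check; the exact list differs per platform and version):

mysql --help | grep -A 1 "Default options" #prints the config files MySQL reads, in order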

There is no separate configuration parameter specific to utf8mb4; you use the same options, just with utf8mb4 values. As you can see, we set character-set-server to 'utf8mb4' and collation-server to 'utf8mb4_general_ci'. The collation affects how characters are compared and ordered; the MySQL documentation has further reading about collations. After you change the files, the MySQL server must be restarted. The commands below may change according to your service names, and on Windows you may need to open cmd as an administrator.

On Linux:

sudo service mysql restart

On Windows:

net stop MySQL57
net start MySQL57

Once the MySQL server has started, you can connect to the database and run the command below to check whether your changes worked:

show variables
where variable_name like '%char%';

You should see the server character set values similar to the ones below:
MySQL Character Set

Database Character Set:
The database character set can be specified when you create the database, or you can alter the database afterwards.

At creation time, the command below is enough:

CREATE DATABASE your_database_name CHARACTER SET utf8mb4;

For an existing database, you can change the character set with the command below:

ALTER DATABASE your_database_name CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;

You can verify the change with the same query used for the server character set:

show variables
where variable_name like '%char%';

Database Character Set

Schema Character Set:
If you have already executed the steps above, you do not need to do anything specific for the rest. But just for migration scenarios, I will explain how to alter and check the status of the remaining database objects.

For an existing schema:

ALTER SCHEMA your_schema_name CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;

You can check the change with the query below:

select *
from information_schema.schemata 
where schema_name = "your_schema_name";

Table Character Set:

For an existing table:

ALTER TABLE your_schema_name.your_table_name CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;

You can verify the change with the query below:

select *
from information_schema.tables t,
     information_schema.collation_character_set_applicability ccsa
where ccsa.collation_name = t.table_collation
and   t.table_schema = "your_schema_name"
and   t.table_name = "your_table_name";

Column Character Set:

For an existing column:

ALTER TABLE your_schema_name.your_table_name MODIFY your_column_name VARCHAR(4000) CHARACTER SET utf8mb4;

You can verify the change with the query below:

select *
from information_schema.columns 
where table_schema = "your_schema_name"
and   table_name = "your_table_name"
and   column_name = "your_column_name";

Connection String of Application:
If you want your application's connection to use UTF-8, you can use the connection string below in your Java application. I think the same approach can be applied in other languages.

jdbc:mysql://localhost:3306/your_database_name?useUnicode=true&characterEncoding=UTF-8
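
To quickly confirm that 4-byte characters really survive the round trip, a minimal check from the shell could look like this (a sketch; the database, table and column names are the placeholders used above):

# With plain utf8 this insert would fail or the value would be mangled
mysql --default-character-set=utf8mb4 -u root -p your_database_name \
  -e "INSERT INTO your_table_name (your_column_name) VALUES ('😀'); SELECT your_column_name FROM your_table_name;"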

Developers Rock!!!

Posted in Java, MySql

Injecting a List with Spring from yaml

In a recent project, I needed to fill a list from a configuration file. Out of old habit I just tried the @Value annotation, and my code broke. Here is what I tried first:

application.yaml:

segment:
  list:
    - SEG1
    - SEG2

AppRunner.java:

@Component
public class AppRunner implements ApplicationRunner {

    @Value("${segment.list}")
    private List<String> segmentList;

    @Override
    public void run(ApplicationArguments applicationArguments) throws Exception {
        segmentList.forEach(segment -> System.out.println(segment));
    }

}

However, it just blew up with the exception below:

Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'segment.list' in string value "${segment.list}"
	at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174) ~[spring-core-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126) ~[spring-core-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.core.env.AbstractPropertyResolver.doResolvePlaceholders(AbstractPropertyResolver.java:204) ~[spring-core-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.core.env.AbstractPropertyResolver.resolveRequiredPlaceholders(AbstractPropertyResolver.java:178) ~[spring-core-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.context.support.PropertySourcesPlaceholderConfigurer$2.resolveStringValue(PropertySourcesPlaceholderConfigurer.java:172) ~[spring-context-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.resolveEmbeddedValue(AbstractBeanFactory.java:808) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1027) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1014) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:545) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	... 23 common frames omitted

After that, I used a workaround and solved the issue with the @Value expression below on segmentList, together with a flattened application.yaml:

    @Value("#{'${segment.list}'.split(',')}")
    private List<String> segmentList;

application.yaml:

segment:
  list: SEG1,SEG2

I was basically splitting a String on commas. It seemed fine when I ran the application. However, when I ran my unit tests I got the error below:

Unit test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = App.class)
public class AppRunnerTest {

    @Autowired
    private AppRunner appRunner;

    @Test
    public void testRun() throws Exception {
        appRunner.run(null);
    }

}

Exception:

Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'segment.list' in string value "#{'${segment.list}'.split(',')}"
 at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174)
 at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126)
 at org.springframework.core.env.AbstractPropertyResolver.doResolvePlaceholders(AbstractPropertyResolver.java:204)
 at org.springframework.core.env.AbstractPropertyResolver.resolveRequiredPlaceholders(AbstractPropertyResolver.java:178)
 at org.springframework.context.support.PropertySourcesPlaceholderConfigurer$2.resolveStringValue(PropertySourcesPlaceholderConfigurer.java:172)
 at org.springframework.beans.factory.support.AbstractBeanFactory.resolveEmbeddedValue(AbstractBeanFactory.java:808)
 at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1027)
 at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1014)
 at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:545)
 ... 46 more

I looked at a lot of annotations like @TestPropertySource and tried a lot of approaches, like injecting a property placeholder configurer into my test classes, to make the unit tests work this way, but I couldn't manage it. The test application context isn't aware of my properties, because a different application context is created when I run the tests. In the end, I found an issue on Spring's JIRA describing exactly the problem I had, and it guided me to the @ConfigurationProperties annotation. Fortunately, I was using Spring Boot and already had this annotation on my classpath. With this guidance I added a new configuration class, changed application.yaml back to the list style, and got my list from the configuration with the help of the @EnableConfigurationProperties annotation at the injection point:

Config class:

@Configuration
@ConfigurationProperties(prefix = "segment")
public class SegmentListConfig {

    private List<String> list;

    SegmentListConfig() {
        this.list = new ArrayList<>();
    }

    public List<String> getList() {
        return this.list;
    }

}

Runner class:

@Component
@EnableConfigurationProperties
public class AppRunner implements ApplicationRunner {

    @Autowired
    private SegmentListConfig segmentListConfig;

    @Override
    public void run(ApplicationArguments applicationArguments) throws Exception {
        segmentListConfig.getList().forEach(segment -> System.out.println(segment));
    }

}

application.yaml:

segment:
  list:
    - SEG1
    - SEG2

When I dug into it a little, I found that @ConfigurationProperties can do a lot more; there are nice blog posts around on how to use it.

I simulated my steps in separate commits in a repository, so you can check both the working and the non-working code there.

Developers Rock!!!

Posted in Java

Adding New Library To Maven Repository

Sometimes you find a custom library on the internet, or create one yourself for something special. You probably cannot find that kind of library in the public Maven repositories to add to the dependencies of your Maven-based projects. But if you need such a library among your Maven dependencies to package your project, there is a Maven feature that lets you install it into your local Maven repository.

As an example, I will show you how to add the Oracle JDBC driver to your repository. I have Oracle XE on my computer, and if you go to the path "C:\oraclexe\app\oracle\product\11.2.0\server\jdbc\lib" you will find the Oracle drivers. Or you can just download the driver from the internet. Now let's open a console and run this command:

mvn install:install-file -Dfile="C:\oraclexe\app\oracle\product\11.2.0\server\jdbc\lib\ojdbc6.jar" -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0 -Dpackaging=jar

After that you will see a screen like the one below:
Maven Install Repository
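
The artifact lands in your local repository under ~/.m2 by default, so a quick sanity check is to list that directory (the path follows the coordinates used in the command above):

ls ~/.m2/repository/com/oracle/ojdbc6/11.2.0/ #the installed jar should be here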

Then you can add this library to your pom.xml as shown below. Make sure the groupId, artifactId, and version are identical to the ones in the command above.

<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0</version>
</dependency>

Developers Rock!!!

Posted in Java

How To Authenticate to a Web Service Protected with HTTP Basic Authentication

In this post, I will give you code to consume a web service protected with HTTP basic authentication. Let's pretend there is a web service called SimpleWebService with a method called getDummy. Here is the code:

public class Test {

	public static void main(String[] args) {
		SimpleWebServiceImplService service = new SimpleWebServiceImplService();

		SimpleWebService simpleWebService = service.getSimpleWebServicePort();

		((BindingProvider) simpleWebService).getRequestContext().put(BindingProvider.USERNAME_PROPERTY,"USERNAME");
		((BindingProvider) simpleWebService).getRequestContext().put(BindingProvider.PASSWORD_PROPERTY,"PASSWORD");

		SimpleWebServiceRequest request = new SimpleWebServiceRequest();
		SimpleWebServiceResponse response = simpleWebService.getDummy(request);

		System.out.println(response);
	}
}

We just put the necessary username and password parameters into our request context, and that's all. This is the only thing we do differently from a regular web service call.
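
If you want to verify the credentials against the endpoint before wiring up the Java client, a quick check from the shell could look like this (a sketch; the URL is a hypothetical placeholder):

curl -u USERNAME:PASSWORD -i "http://example.com/services/SimpleWebService?wsdl" #a 200 response means the basic-auth credentials are accepted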

Developers Rock!!!

Posted in Java