I am delighted to be speaking at SQL Saturday Jacksonville 2019. This will be my first time speaking at this event.

SQL Saturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server and Cloud Technologies. It is a great opportunity to listen to some of the best speakers around, interact with the brightest minds and network with the awesome SQL Family.

SQL Saturday Jacksonville will be held on May 4, 2019 at the University of North Florida campus, 1 UNF Drive, Jacksonville, Florida 32224, United States.

You can view the Event Schedule here – https://www.sqlsaturday.com/820/Sessions/Schedule.aspx

There is a great mix of topics at varied skill levels around the Microsoft Data Platform, ranging from Database Development and Cloud Development to Analytics and Visualization, Professional Development, Enterprise Database Administration & Deployment and more. The event looks fun and has a ‘Star Wars’ theme, with a SQL Jedi Clinic to assist attendees with any current challenges or questions they might have.

Coming back to my own talk, I will be presenting on ‘DevOps, Continuous Integration & Database Lifecycle Management: Rule them all‘. Join me in this session to understand the problems with traditional database development, why organizations are moving towards Continuous Integration and Database DevOps, the problems these practices solve, and the toolsets that will assist you on the journey towards seamless database deployments. I will also talk about the design principles that help you develop data-intensive cloud-native applications.

I hope to see you there. Cheers!


Last week during a presentation, one of the attendees asked me about my favorite feature in Azure Data Studio. It was an interesting question, and I thought I would write a quick blog post about it.

It’s been over a year since I almost entirely stopped using Visual Studio and SQL Server Management Studio in my day-to-day work. These are the two tools I have used for most of my career as a developer. I decided to move from Windows to Mac at work, so I needed tools that were cross-platform. Visual Studio Code and SQL Operations Studio (now Azure Data Studio) came to the rescue. Both tools are awesome: lightweight, super fast and highly extensible.

Azure Data Studio is an open source, cross-platform data management tool that works with SQL Server, Azure SQL Database and Azure SQL Data Warehouse from Windows, macOS and Linux machines. SSMS continues to be the flagship product for performing administrative tasks on the Microsoft Data Platform; for development work, however, you can leverage the lightweight Azure Data Studio. For the past year I have been developing microservices, primarily executing DDL and DML scripts against the database with no need for administrative tasks, so Azure Data Studio was a good fit.

Azure Data Studio is an open source GitHub project. There are monthly releases comprising new features, enhancements and bug fixes that address feedback from the community.

February release of @AzureDataStudio is now available!

– Admin Pack for SQL Server extension
– Auto-sizing columns in results
– Notebook UI improvements
– Profiler Filtering
– Save Results as XML
– Deploy scripts

Learn more in the blog post #SQLServer https://t.co/KXJ1ZZjpIX

— Azure Data Studio (@AzureDataStudio) February 13, 2019

You can create new issues and track progress on GitHub — https://github.com/Microsoft/azuredatastudio/issues

Azure Data Studio provides a lot of extensibility options, and the extension model is actually my favorite feature of this tool. There is no need for huge software installs that bundle tools and functionality you don’t require. The base install of Azure Data Studio is small and lightweight, and extensions provide an easy way to add more functionality, so you can customize your environment with exactly the tooling you need.

For a while, I kept running into high memory usage when working with Visual Studio and SSMS at the same time on my C# and SQL projects, which caused frequent crashes and slow performance. Restarting the system temporarily fixed the memory issues, and whenever I had to install a new version of these tools, upgrading after hours was the safer option, especially with Visual Studio. All of this hurt my overall productivity at work. That is no longer the case with the VS Code and Azure Data Studio combination!

The January release of VS Code introduced a feature that lets you install extensions without forcing a reload (restart) of VS Code; you are no longer required to reload when you install or enable an extension. This was the icing on the cake, since installing an extension already took just a few seconds. Hopefully the same feature will soon make its way into Azure Data Studio!

Since Azure Data Studio is built on top of VS Code, most of the extensibility APIs are available. Many of Azure Data Studio’s own capabilities are built as extensions, so you can enable or disable them as your requirements change. A number of extensions are available from Microsoft, its partners and community members.

A few of my favorite extensions, which I have been using on a day-to-day basis, are:

Azure Data Studio is built on the same framework as Visual Studio Code, so extensions for Azure Data Studio are built using Visual Studio Code. If you are interested in creating your own Azure Data Studio extension, you can go through this tutorial.

Catch the excellent and informative session about Azure Data Studio presented by Vicky Harp at SQLBits 2019 here.

Another question that often comes up is SSMS vs. Azure Data Studio. You can read more about that here.


Problem Statement

We bumped into a Flyway error while trying to deploy some new schema changes against an existing database. The application was a Java microservice, with Jenkins as the CI/CD tool for deployment. The build pipeline was not able to deploy the schema changes to our Test/DevQA environment, since the application was not able to connect to Eureka.

When we looked at the logs in SumoLogic, we found multiple errors logged by the application indicating issues with Flyway —

Application startup failed

Error creating bean with name 'flywayInitializer' defined in class path resource

Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException: Found non-empty schema(s) without schema history table! Use baseline() or set baselineOnMigrate to true to initialize the schema history table.

{
  "source": "stdout",
  "level": "ERROR",
  "message": "Application startup failed",
  "logger": "org.springframework.boot.SpringApplication",
  "thread": "main",
  "class": "org.springframework.boot.SpringApplication",
  "exception": "org.springframework.beans.factory.BeanCreationException"
}

Use Case

Reading through the error details and stack trace, I was able to understand what was going on. The microservice already had a database associated with it and was deployed across all the environments. Flyway was not used for deploying the initial schema changes; it seemed the database deployment had been done manually.

As a best practice, we are trying to use Flyway as the database migration framework for executing DDL and DML scripts for our Java microservices. This threw the application startup error, since during deployment Flyway found a non-empty schema without a schema history table.

Let’s look at how Flyway works in detail to understand this better.

How does Flyway work?

If you want to spin up a new DB instance in another environment, Flyway can do it for you in a breeze. At application startup, it tries to establish a connection to the database and throws an error if it cannot.

It helps you evolve your database schema easily and reliably, with no need to execute database scripts manually.

Every time the need to upgrade the database arises, whether it is the schema (DDL) or reference data (DML), you can simply create a new migration script with a version number higher than the current one. When Flyway starts, it will find the new script and upgrade the database accordingly.

Flyway scans the file system for migration scripts and sorts them by version number.

Flyway creates a table named ‘schema_version‘ in your database. This table tracks the state of the database and keeps an explicit record of the SQL scripts that have been executed. As each migration is applied, the schema history table is updated.
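As a minimal sketch (the file and object names here are hypothetical, not from the original post), a pair of versioned migrations might look like this:

-- V1__create_employee_history.sql
-- Flyway runs this first because of the 'V1__' version prefix.
CREATE TABLE EmployeeHistory (
    Id           INT          NOT NULL PRIMARY KEY,
    EmployeeName VARCHAR(100) NOT NULL,
    ChangedOn    DATETIME     NOT NULL
);

-- V2__add_department_column.sql
-- A later change ships as a new script with a higher version number;
-- Flyway applies it on the next startup and records it in schema_version.
ALTER TABLE EmployeeHistory ADD Department VARCHAR(50) NULL;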

Resolution

Since we were introducing Flyway to deploy schema changes against an already existing database containing a table, the application threw an error. There was no existing ‘schema_version’ table in the database, so Flyway was not able to track the state of the database and execute the correct SQL scripts from the application repository.

However, if there had been no existing database and we had been building the schema from scratch, this would not have been a problem. Flyway would have successfully created the database and executed the schema changes.

Since this application was already running in Production, dropping the table, letting Flyway recreate it along with the ‘schema_version’ table, and repopulating the existing data was out of scope.

So we had to figure out a way to inform Flyway that it was dealing with a database with existing tables. You can do that by explicitly setting the Flyway baseline-on-migrate property to true in the application.yml file.

flyway:
  enabled: true
  schemas: EmployeeHistory
  locations: classpath:/sql
  baseline-on-migrate: true

From Flyway Documentation —
https://flywaydb.org/documentation/configfiles

# Whether to automatically call baseline when migrate is executed against a non-empty schema with no schema history table.
# This schema will then be initialized with the baselineVersion before executing the migrations.
# Only migrations above baselineVersion will then be applied.
# This is useful for initial Flyway production deployments on projects with an existing DB.
# Be careful when enabling this as it removes the safety net that ensures
# Flyway does not migrate the wrong database in case of a configuration mistake! (default: false)
# flyway.baselineOnMigrate= true

Once I set the baselineOnMigrate property to true and triggered another pipeline build, I noticed that the schema_version table had been created in the DB with a baseline record.
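For reference, a freshly baselined history table typically contains a single row like the one below (illustrative values only; the exact columns vary by Flyway version, and the table is assumed to live in the EmployeeHistory schema configured above):

SELECT installed_rank, version, description, type, success
FROM EmployeeHistory.schema_version;

-- installed_rank | version | description           | type     | success
-- 1              | 1       | << Flyway Baseline >> | BASELINE | 1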

However, Flyway did not apply the new schema changes, and I did not see them in the database.

The point to note here is that since we performed the baseline, Flyway set it as the initial version in the schema_version table, and only migrations above the baseline version are applied. So if your SQL file is prefixed with ‘V1__’, it won’t run. For the Flyway migration to work, you need to rename the file with a ‘V2__’ prefix (see the sketch below).
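For example, with a hypothetical migration file, the fix is just a file rename; the script content is unchanged:

-- Before (never runs): V1__create_employee_history.sql
--   version 1 is already taken by the baseline record.
-- After (applied):     V2__create_employee_history.sql
--   version 2 is above the baseline version, so Flyway picks it up.

-- After the next build, the run can be verified in the history table:
SELECT version, description, success
FROM EmployeeHistory.schema_version
ORDER BY installed_rank;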

Once I made this change and pushed a Jenkins build, I was able to see the script executed by Flyway and an entry made in the ‘schema_version’ table.

The Jenkins build ran successfully, and the changes were deployed to all environments.

Hopefully this blog post was helpful to you. In case it does not resolve your issue, please feel free to comment below and I would be happy to assist.


I am happy and honored to be a Friend of Redgate 2019 — my 2nd year in a row.

The Friends of Redgate program is an exclusive group of influential and active community members, such as popular blog writers, speakers, consultants, as well as Microsoft Data Platform MVPs.

Redgate develops tools for developers and data professionals and maintains community websites such as SQL Server Central and Simple Talk. You can find more about the wide range of tooling provided by Redgate here.

I had a good time connecting with the Redgate Team and fellow Friends at the 2018 FoRG Dinner during PASS Summit, Seattle.

You can find the full list of the FoRG Family 2019 here.

Thanks Redgate for the opportunity. I am looking forward to another awesome year.


It’s almost time for SQL Saturday Pensacola, and I am excited to be speaking at this fun-filled event this weekend. I had a lot of fun at this event last year, and I am looking forward to an awesome time this year too.

SQL Saturday Pensacola will be held on June 2, 2018 at Pensacola State College, Main Campus, 1000 College Blvd, Pensacola, Florida 32504.

SQL Saturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence and Analytics. It is a great opportunity to listen to some of the best speakers around, interact with the brightest minds and network with the awesome SQL Family.

You can view the Event Schedule here – http://www.sqlsaturday.com/743/Sessions/Schedule.aspx

There is a great mix of topics at varied skill levels around the Microsoft Data Platform, ranging from Database Development and Administration to Cloud Development, Business Intelligence and more.

Coming back to my own talk, this year I will be presenting on ‘DevOps, Continuous Integration & Database Lifecycle Management: Rule them all‘. Join me in this session to understand the problems with traditional database development, why organizations are moving towards Continuous Integration and Database DevOps, the problems these practices solve, and the toolsets that will assist you on the journey towards seamless database deployments.

I presented this talk at CodeStock in Knoxville and at SQL Saturday Atlanta, and received great feedback from the attendees. This time I plan to take it a step further and cover some advanced topics.

I hope to see you there. Cheers!


I am excited to be presenting at SQL Saturday Atlanta this weekend. This will be my 2nd year speaking at this awesome conference. I am looking forward to meeting lots of folks from the tech community and having a good time with friends and SQLFamily.

It seems more than 900 folks have registered already, so I hope to have an absolutely rocking time out there. You can find the session schedule here –

http://www.sqlsaturday.com/733/Sessions/Schedule.aspx

We currently have 888 people signed up for #SQLSatATL
Speakers better bring their "A" game because attendees will be there, waiting for it.

— SQL Saturday Atlanta (@SQLSatATL) May 16, 2018

Coming back to my own talk, this year I will be presenting on ‘DevOps, Continuous Integration & Database Lifecycle Management: Rule them all‘. Join me in this session to understand the problems with traditional database development, why organizations are moving towards Continuous Integration and Database DevOps, the problems these practices solve, and the toolsets that will assist you on the journey towards seamless database deployments.

Sneak peek below; this will be the focus of my entire talk.

I hope to see you there. Cheers!


Blue-Green Deployment is a software release pattern for deploying and releasing your application with minimal downtime and risk. This is achieved by maintaining two identical production-ready environments at the same time, termed Blue and Green.

At any point in time, one environment is active and receives all the production traffic. When the time comes for a new release, the changes are deployed to the other, non-active, identical environment. This environment serves as a staging server, and all sanity testing is performed there. Once the changes are verified, you flip the switch and all traffic shifts to the new environment, which becomes the new Production environment. The previous Production environment still exists, so if there are any unexpected issues with the deployment, traffic can be shifted back to the old environment. With just a flip of the switch, you redirect production traffic between two identical environments running the current and the new version of the application.

Rolling back complex application and database changes is not easy. It might take hours of development team effort to attempt a clean rollback.

With Blue-Green Deployments, if there are issues with the deployment in Production, there is no need to spend hours planning and performing an application rollback. All we need to do is modify the load balancer settings and shift all the traffic back to the old server. There is no downtime, no effort lost on a rollback and, most importantly, no impact on business users. It does not matter if it happens during business hours, because all you need to do is shift traffic to your old server, which you know for sure works.

Handling database changes with this strategy can be a challenge, but it can be mastered by following some best practices. Keep in mind that sharing a single database between Blue and Green is a better approach than maintaining two separate databases and devising strategies to keep the data in sync.

There are a few things you need to keep in mind when dealing with database changes in a Blue-Green Deployment (a short sketch follows the list):

  • Ensure that all migrations are idempotent, meaning that running a script more than once has no additional impact.
  • Don’t make destructive database changes: do not drop a column. If you want to move data from one column to another, do not delete the data from the old column, as that might break the old version of your application.
  • Ensure that your changes are backward compatible. If you want to add a new column, make it nullable or give it a default value so that both versions can run without issues.
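As a minimal sketch of these guidelines (table and column names are hypothetical; SQL Server syntax), an idempotent, backward-compatible migration might look like this:

-- Idempotent: running this script more than once has no additional impact.
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.Orders') AND name = 'OrderSource'
)
BEGIN
    -- Backward compatible: nullable with a default, so both the old and
    -- new versions of the application keep running against the same schema.
    ALTER TABLE dbo.Orders ADD OrderSource VARCHAR(20) NULL DEFAULT ('web');
END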

I would highly recommend that you avoid breaking changes and follow the ‘Expand and Contract Pattern‘ to keep your database changes backward compatible.

For example, if you want to make a breaking change such as renaming a column, follow the steps below (a SQL sketch follows the list):

  • Expand — Instead of renaming the existing column, create a new column with the updated name.
  • Migrate — Move the data from the old column to the new column.
  • Contract — Once you have verified that the code works correctly with the new column, delete the old column.
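A minimal sketch of the three steps (hypothetical names; in practice each step ships as a separate deployment, with the contract step run only after every live version has moved to the new column):

-- Expand: add the new column alongside the old one.
ALTER TABLE dbo.Customers ADD FullName VARCHAR(200) NULL;

-- Migrate: copy the data across while both columns coexist.
UPDATE dbo.Customers SET FullName = CustomerName WHERE FullName IS NULL;

-- Contract: remove the old column once nothing reads or writes it.
ALTER TABLE dbo.Customers DROP COLUMN CustomerName;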

Blue-Green Deployments are an industry-proven strategy for increasing the reliability and uptime of your application. The switch happens at the infrastructure level, not the application level; however, to make your application ready for Blue-Green, you will need to make your changes backward compatible and idempotent. It is worth noting that the pattern comes with the additional cost of maintaining two production-ready environments, but the benefits easily outweigh the hardware cost.

