
Our Dynamics GP installation uses Reporting Services to generate sales documents, and these documents often need to be emailed to customers and suppliers. It was recently brought to my attention that pressing the share icon in Acrobat DC to send a PDF by email launches a new share window. This window often stalls, sometimes taking 30 seconds or longer to open with a spinning icon. I assume this is an attempt by Acrobat to draw people into its cloud document management solutions. For users processing transactions day in, day out, this mounts up over a day and is annoying and inefficient.

Although I've not checked, I think this is because Acrobat is contacting the Adobe servers in the background and our corporate firewall blocks a lot of traffic. Admin users don't seem to suffer this problem, which may be evidence that this is the case.

Solution to slow email sending in Acrobat:

To enable the envelope icon on the tools menu, which jumps straight to attaching the PDF to an email in Outlook and avoids this slow overlay share window, a registry change is required. The change is documented in the following Acrobat resource: How to use the email icon to send a PDF directly as email attachment.

Basically, the following registry change is required for DC users. Having applied this change, Outlook opens in under a second with the PDF attached.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\DC\FeatureLockDown]
"bSendMailShareRedirection"=dword:00000000


I had to rebuild the quantity on purchase order in the inventory site quantities table today (the On Order figure in Item Enquiry).

I wrote this SQL script, which works for our install with our settings, but you should verify that your install does not have other modules or modifications that may break this.

Comment out the UPDATEs and uncomment the SELECTs to see what is going to happen first. Running SQL against your production environment is dangerous: test, test, test first.
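It is also worth keeping a copy of the quantity table before changing anything, so the original values can be compared against or restored. A minimal sketch (the backup table name here is just an example):

-- Keep a copy of the item quantity master table before changing anything
SELECT *
INTO dbo.IV00102_OnOrderFix_Backup
FROM IV00102;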

-- Rebuilds the qty on order in inventory from the purchase order tables
-- Don't know how the manufacturing module would play with this
-- Note: the script ends with ROLLBACK for safety; change it to COMMIT once you are happy with the results
BEGIN TRANSACTION;

-- Part 1: Correct the site/location qty on order from receipts and purchase orders
WITH CTE_ShippedQtys
AS (
    SELECT PONUMBER
        ,POLNENUM
        ,SUM(QTYSHPPD) AS TotQTYSHIPPED
    FROM POP10500 -- purchasing receipt line quantities
    GROUP BY PONUMBER
        ,POLNENUM
    )
    ,CTE_OnOrderQtys
AS (
    SELECT SUM(QTYORDER - QTYCANCE - ISNULL(TotQTYSHIPPED, 0)) AS QTYORDER
        ,ITEMNMBR
        ,LOCNCODE
    FROM POP10110 -- purchase order lines
    LEFT JOIN CTE_ShippedQtys ON POP10110.PONUMBER = CTE_ShippedQtys.PONUMBER
        AND POP10110.ORD = CTE_ShippedQtys.POLNENUM
    WHERE POLNESTA IN (2, 3)
    GROUP BY ITEMNMBR
        ,LOCNCODE
    )
UPDATE IV00102
SET QTYONORD = QTYORDER
--SELECT *
FROM IV00102 -- item quantity master
JOIN CTE_OnOrderQtys ON IV00102.ITEMNMBR = CTE_OnOrderQtys.ITEMNMBR
    AND IV00102.LOCNCODE = CTE_OnOrderQtys.LOCNCODE
WHERE QTYONORD != QTYORDER

ROLLBACK TRANSACTION
-- Sum up the location on-order qtys into the summary record
-- Note: again ends with ROLLBACK for safety; change it to COMMIT once you are happy with the results
BEGIN TRANSACTION;

-- Part 2: Correct the summary location record
WITH CTE_SumLocations
AS (
    SELECT SUM(QTYONORD) AS totqty
        ,ITEMNMBR
    FROM IV00102
    WHERE LOCNCODE != ''
    GROUP BY ITEMNMBR
    )
UPDATE IV00102
SET QTYONORD = totqty
--SELECT *
FROM IV00102
JOIN CTE_SumLocations ON CTE_SumLocations.ITEMNMBR = IV00102.ITEMNMBR
    AND IV00102.LOCNCODE = ''
    AND IV00102.RCRDTYPE = 1 -- summary (all locations) record
    AND QTYONORD != totqty

ROLLBACK TRANSACTION
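Once both parts have been committed, a quick read-only consistency check (a sketch assuming the standard IV00102 layout used above) confirms that each summary record now matches the sum of its site records:

-- List any items where the summary record still disagrees with the sum of the site records
SELECT summary.ITEMNMBR
    ,summary.QTYONORD AS SummaryQtyOnOrder
    ,sites.SiteTotal
FROM IV00102 summary
JOIN (
    SELECT ITEMNMBR
        ,SUM(QTYONORD) AS SiteTotal
    FROM IV00102
    WHERE LOCNCODE != ''
    GROUP BY ITEMNMBR
    ) sites ON sites.ITEMNMBR = summary.ITEMNMBR
WHERE summary.LOCNCODE = ''
    AND summary.RCRDTYPE = 1
    AND summary.QTYONORD != sites.SiteTotal;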




Today I attended SQLBits in Manchester UK, where there was a session "Performance tuning SQL server on crappy hardware" by Monica Rathbun.

Monica has the fast and punchy presentation style I enjoy. Although I had already experienced or knew most of what was covered, it was still a good presentation. There was one takeaway I noted in my notebook to come back to later. Now back at the hotel, I'm having a look.

Monica was promoting the use of COMPRESSION – not just backup compression but ROW/PAGE database compression in the database engine itself.

By compressing the data in the database, the theory goes that you reduce I/O required to move the data around and allow much more relevant data to be held in SQL server’s caches and perhaps the underlying storage system’s caches too. Having more data in memory leads to a more performant system.

For some reason the existence of compression within the database engine was something that had slipped under my radar, perhaps because it used to be an Enterprise-only feature, but now it's available to me in our SQL Server 2016 Standard Edition.


This is particularly interesting to Dynamics GP users, as our database is full of padded CHAR data types and has very wide tables full of only partially used columns (depending on the modules used) or repeating data in the case of settings flags. Dynamics GP also has many tables full of decimal columns that are all zero, again due to configuration or options in how GP is set up or which modules are active. So from the outset it feels like Dynamics GP would benefit.

“Enabling compression only changes the physical storage format of the data that is associated with a data type but not its syntax or semantics”. This means the compression occurs inside the SQL engine but is transparent to the application interacting with SQL Server. There are two levels of compression of interest and available to us. ROW compression works on each data row in the table:

  • It uses variable-length storage format for numeric types (for example integer, decimal, and float) and the types that are based on numeric (for example datetime and money).

  • It stores fixed character strings by using variable-length format by not storing the blank characters.

So imagine how much room can be saved when you consider the fields in Dynamics GP are fixed length!

What is more, there is another option, PAGE compression, which looks at repeating data within the pages stored on the filesystem and compresses it. As this works over an entire page it is heavier on CPU resources, but it is great where there is a lot of repeated data down the rows of a table. Wait, repeating data down the rows of a column? That is exactly what we get lots of, thanks to status flags and little-used fields in the GP tables that vary little from the top to the bottom of the table.

Just look at something like the Item Master table (IV00101) or one of the pricing tables. There are distributions and settings that are the same, repeated for all items, which makes them ripe for compression as this leads to repeated content in the pages.


So both the nature of the data in the tables and the use of compressible data types by Dynamics GP make it look like a good candidate for compression.


Compression does cause more CPU load, but unless you are pulling millions of rows the overhead seems insignificant; see more here: https://sqlperformance.com/2017/01/sql-performance/compression-effect-on-performance where it is shown to have little effect.


We can run EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE' ;

This will show us, by sampling a subset of the table (much as statistics do), how much space should be saved by compressing the table, without having to actually do it. Let's try it with the Item Master table in Dynamics GP.

SELECT COUNT(*) from IV00101


EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE' ;

So we can see the Item Master table goes from 81,944 KB to 16,304 KB, which is only 20% of what it was!

Now trying it with IV00108, which has SELECT COUNT(*) FROM IV00108 = 6,107,169 rows, we get 779,816 KB going down to 139,440 KB, only 19% of what it was before.

So you can see how much saving can be achieved this way; imagine the reduced I/O when only 20% of what used to be read needs reading.

Even going down to ROW compression gives you only slightly less compression but less overhead too:

Only 46% of what it was with row level compression.


Downside

There are not many downsides. The first is technical: compressing the data takes CPU. Most SQL servers are not CPU bound in terms of resources, so this should not be an issue; expect typically a 10-30% increase, so check your current CPU load. As compression is selected table by table, you could tackle only the most sizable tables in GP (see the sketch below for finding them) to get the majority of the benefit without having to apply compression to every table and index. When data is written you take a hit compressing it, to reap the rewards later, so tables and indexes with great numbers of inserts per second may cause issues (they would have to be big loads).
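A rough sketch for finding those candidates, using the standard sys.dm_db_partition_stats DMV to rank tables by reserved space (run it in the company database):

-- Largest tables in the current database, the best candidates for compression
SELECT TOP (20) t.name AS TableName
    ,SUM(ps.reserved_page_count) * 8 AS ReservedKB
    ,SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS TableRows
FROM sys.dm_db_partition_stats ps
JOIN sys.tables t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY ReservedKB DESC;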

The article below has some good scripts to see what will work and what will not…

https://thomaslarock.com/2018/01/when-to-use-row-or-page-compression-in-sql-server/


Upside

Much smaller data means less I/O and more in cache, and more data in memory makes for more efficient queries.


Summary

I am going to gradually add tables to compression and see what happens to CPU usage. The benefits should be substantial in terms of reads, so it seems well worth pursuing.
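Enabling compression itself is just a rebuild of the table and its indexes. A minimal sketch, using IV00101 as the example table and assuming the savings have been estimated first as above:

-- Apply PAGE compression to the table (heap or clustered index)
ALTER TABLE dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Non-clustered indexes are compressed separately
ALTER INDEX ALL ON dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Or use ROW compression for a lighter CPU overhead
--ALTER TABLE dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = ROW);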


Support

This article would indicate it is supported for Dynamics GP, although the tool referenced for choosing tables to compress is no longer available; however, it is possible to work with the database manually to turn on compression.

https://blogs.msdn.microsoft.com/nav/2011/07/21/sql-server-data-compression-and-microsoft-dynamics/


This is the reason going to conferences is so worthwhile; this is only one of many things I learnt or had reinforced today in the various sessions I attended.

Copy assemblies into the correct reporting services folder

When migrating to a new SQL Server, the barcodes were causing an issue, firstly because we lacked the barcode assemblies in the Reporting Services bin folder for the new version of Reporting Services that came with SQL Server 2016.

Finding the DLLs still in the path used by the old Reporting Services service, they were copied to the new active location as shown:


Permissions Error

After moving the required .dll files into the correct location for this server, we then got a permission error.

Failed to load expression host assembly. Details: Could not load file or assembly 'Zen.Barcode.SSRS, Version=3.1.0.0, Culture=neutral, PublicKeyToken=b5ae55aa76d2d9de' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418) (rsErrorLoadingExprHostAssembly)

rssrvpolicy.config

Granting permissions on the assemblies for Reporting Services is a matter of adding some code group grants to the policy file. Insert the following block after the last </CodeGroup> tag in that file.

"Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\rssrvpolicy.config"

<CodeGroup class="UnionCodeGroup" Name="BarcodeControl" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.SSRS.dll FullTrust permission."> 
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.SSRS.dll"/>
</CodeGroup>
<CodeGroup class="UnionCodeGroup" Name="BarcodeControl2" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.SSRS.dll FullTrust permission.">
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.Core.dll"/>
</CodeGroup>
<CodeGroup class="UnionCodeGroup" Name="BarcodeControl3" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.SSRS.dll FullTrust permission.">
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.Design.dll"/>
</CodeGroup>
<CodeGroup class="UnionCodeGroup" Name="BarcodeControl4" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.SSRS.dll FullTrust permission.">
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.SSRS.Design.dll"/>
</CodeGroup>

At this point we were able to start using the barcodes again in reporting services.

Reminder on setting up a barcode in a report

Create references in the Report>>Properties for the .dll files as shown

In the Code tab of the report properties, add a function that returns an image binary; this will be used as an expression for the image source in the report.

Drop an image into the report to act as the barcode and set the expression and MIME type for the image as shown.

Example of the expression for the image; in this example the value of another field is used as the source for generating the barcode.

If you found this helpful then do comment, it helps motivate me to document more of these kinds of things.

Reference: https://www.barcoderesource.com/configurereportingservices.shtml


TL;DR: Turn this option on, from false to true.

Whilst looking at a problem SQL Server instance, I was on a diagnostic journey to find out why the query plan cache was being completely cleared every few minutes. It turned out the server had a bad memory setup and SQL Server was suffering from memory pressure. However, I did learn about a setting that I find hard to think of a reason not to use in a typical setup (what install is typical, Tim?). The option Optimize for Ad hoc Workloads tells SQL Server not to cache the query plan for a query until it sees it twice.

This is important where the SQL query workload is very varied, particularly where ad hoc, non-stored-procedure queries are being run, as they can bloat the plan cache. The plan cache stores the compiled execution plan required to execute a particular query. When a new query is encountered, the plan is calculated and put in the cache to save having to compute the plan again if the query is seen again. The problem is that ad hoc queries are unlikely to be seen again, so SQL Server is using up memory unnecessarily that could be used for better things.
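To get a feel for whether your plan cache is suffering from this, a rough sketch using the standard plan cache DMV (tweak as required):

-- Count and size of single-use ad hoc plans sitting in the plan cache
SELECT COUNT(*) AS SingleUseAdhocPlans
    ,SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS SizeMB
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
    AND usecounts = 1;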

I understand, but need to verify, that this is also true of ORMs such as Entity Framework: although it does create parameterised SQL to execute, the length of those parameters in the SQL it sends to the server can vary depending on the length of the values in those parameters. This can create a large number of query plans for what are essentially very similar queries. (OK, technically the length of the searched text can vary the query plan, but run with this.)

The setting greatly reduces the number of plans, and the memory used for them, on the server, as a query's plan is only fully cached once the query has been seen twice. The first time it is seen, a small stub is created that is just enough to spot the query if it turns up a second time; only then will the query plan be cached. Truly ad hoc queries won't be seen again and space in the cache is saved. The compile time is not really worth worrying about: having to compile the query twice is no big deal, because after the second compile the next 100,000 executions can come from the cache, so proportionally it is an efficiency worth having for the tiny, tiny hit of compiling the query plan twice.
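Turning the option on is a couple of sp_configure calls at the instance level, so make sure it suits every workload on the server first. A minimal example:

-- Enable Optimize for Ad hoc Workloads for the instance
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;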

There is a good article here on how to start looking into whether you have bloat: http://www.sqlservercentral.com/articles/SQLServerCentral/91250/

There is a similar setting, Forced Parameterization; for details see this post by Brent Ozar: https://www.brentozar.com/archive/2018/03/why-multiple-plans-for-one-query-are-bad/. Basically this setting forces the server to look more carefully at queries and infer where parameters would exist if you were to parameterise them. This is great for reducing plans in the cache, but it increases processor work as the server has to examine queries much more closely to work out where the parameters lie. It can also lead to bad parameter sniffing: before, the plans would have been totally bespoke, now they will be shared, and the exact parameter values can change the optimal plan.
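For reference, forced parameterization is set per database rather than per instance. A sketch, using a placeholder database name:

-- Forced parameterization is a database-level setting
ALTER DATABASE [YourCompanyDB] SET PARAMETERIZATION FORCED;

-- Revert to the default behaviour
--ALTER DATABASE [YourCompanyDB] SET PARAMETERIZATION SIMPLE;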

Upgrading SQL Server to SP4 or SQL Server 2016: upgrade step 'msdb110_upgrade.sql' encountered error 2714, state 6, severity 25

When doing a dummy run of a SQL instance upgrade, I encountered this error; it resulted in the SQL service not starting after the upgrade and the upgrade wizard reporting errors.

After a couple of attempts I had to dig in to find out what was going on, so, referencing the article SQL Server Service fails to start after applying patch. Error: CREATE SCHEMA failed due to previous errors, I tried deleting the DatabaseMailUserRole schema from msdb, but the server still failed to start.

This was SQL Server 2012, so I checked the SQL Server logs found at C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG.txt (that path may vary between versions).

The tail of the log looked like the following (I had already removed the DatabaseMailUserRole, so it may have reported that too previously):

2019-02-15 17:04:20.53 spid4s      Setting object permissions...
2019-02-15 17:04:20.63 spid4s      Error: 2714, Severity: 16, State: 6.
2019-02-15 17:04:20.63 spid4s      There is already an object named 'TargetServersRole' in the database.
2019-02-15 17:04:20.63 spid4s      Error: 2759, Severity: 16, State: 0.
2019-02-15 17:04:20.63 spid4s      CREATE SCHEMA failed due to previous errors.
2019-02-15 17:04:20.63 spid4s      Error: 912, Severity: 21, State: 2.
2019-02-15 17:04:20.63 spid4s      Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 2714, state 6, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2019-02-15 17:04:20.63 spid4s      Error: 3417, Severity: 21, State: 3.
2019-02-15 17:04:20.63 spid4s      Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
2019-02-15 17:04:20.63 spid4s      SQL Server shutdown has been initiated
2019-02-15 17:04:20.63 spid4s      SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.
2019-02-15 17:04:20.80 spid14s     The SQL Server Network Interface library successfully deregistered the Service Principal Name (SPN) [ MSSQLSvc/***] for the SQL Server service.
2019-02-15 17:04:20.80 spid14s     The SQL Server Network Interface library successfully deregistered the Service Principal Name (SPN) [ MSSQLSvc/***:1433 ] for the SQL Server service.

So it seemed I had the same problem as the referenced blog post, but with another schema.

Solution

After running the upgrade and it failing, I started SQL Server with trace flag 902, using net start mssqlserver /T902 from an elevated command prompt. This prevents the startup upgrade scripts from running.

I then connected to the SQL Server instance using SSMS, located the 'TargetServersRole' schema under the msdb database, right-clicked and deleted it.
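The same thing can be done in T-SQL if you prefer (a sketch of what the SSMS delete does; it will only succeed while the schema contains no objects):

-- Remove the conflicting schema from msdb so the upgrade script can recreate it
USE msdb;
GO
DROP SCHEMA [TargetServersRole];
GO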

I then stopped SQL Server and restarted it normally, without the trace flag, so that it ran 'msdb110_upgrade.sql' at startup again.

This time it started normally.

Problem sorted!

If this was helpful please comment as it motivates me to blog more!


 
This view lists the SQL Agent jobs that are currently running:

CREATE VIEW myschema.Admin_RunningSQLJobs
AS
SELECT job.NAME
    ,job.job_id
    ,job.originating_server
    ,activity.run_requested_date
    ,DATEDIFF(SECOND, activity.run_requested_date, GETDATE()) AS Elapsed
FROM msdb.dbo.sysjobs_view job
JOIN msdb.dbo.sysjobactivity activity ON job.job_id = activity.job_id
JOIN msdb.dbo.syssessions sess ON sess.session_id = activity.session_id
JOIN (
    -- restrict to the most recent SQL Agent session so stale activity rows are ignored
    SELECT MAX(agent_start_date) AS max_agent_start_date
    FROM msdb.dbo.syssessions
    ) sess_max ON sess.agent_start_date = sess_max.max_agent_start_date
WHERE run_requested_date IS NOT NULL
    AND stop_execution_date IS NULL


Using this view we can check for a running job before running our own job. In this example we don't want the nightly pricing build to run if a monthly build is in progress (note that the nightly job is scheduled later than the monthly one), so wrap the call to the stored procedure in the job step like this:
 
IF NOT EXISTS (SELECT * FROM myschema.Admin_RunningSQLJobs WHERE NAME = 'Pricing - Monthly Rebuild')
BEGIN
    EXEC myschema.[Nightly Pricing build]
END

 
 

In previous blog posts we have talked about the way that other applications can drill down into parts of GP using a direct approach via a pluggable protocol handler that is installed when GP is installed. Sometimes this does not work; to check whether the protocol handler has an endpoint to talk to, follow these instructions.

Get Handle.exe

Download Handle.exe, which is part of the SysInternals tools written by Mark Russinovich that Microsoft acquired some time back:

https://docs.microsoft.com/en-us/sysinternals/downloads/handle

Extract the zip file; if you have a 64-bit system use the 64-bit version, otherwise use the plain version.

Use PowerShell to pipe out and decode handle.exe output

Open a PowerShell editor on your machine (e.g. from the start menu, search for PowerShell and launch Windows PowerShell ISE).

In my case I extracted handle.exe into the C:\Users\tw\Documents\ folder; you will need to change the path to wherever you extracted it.

Execute the following PowerShell Command:

C:\Users\tw\Documents\handle64.exe net.pipe -accepteula

This accepts the user agreement and lists the processes holding handles whose names match net.pipe, ready for decoding in PowerShell.

This gives the following output.

You can see that Visual Studio and Dynamics.exe have named pipes running.

If I close Dynamics GP, then run the command again…

You see that Dynamics.exe is no longer listed as having named pipes available, as it is closed.

The net.pipe handle names include a Base64-encoded form of the endpoint address; decoding the string taken from the handle output reveals the address being listened on:

[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String("bmV0LnBpcGU6Ly8rLw=="))

net.pipe://+/

Using PowerShell to list all the named pipes on the machine:

[System.IO.Directory]::GetFiles("\\.\\pipe\\")


The PowerShell modules for GP allow a number of DevOps activities. For example, you may install the Dynamics GP lesson (sample) company TWO and install the system database via these PowerShell modules. I take full advantage of them when building and running Docker images for Dynamics GP.

Today, whilst doing this, I encountered the following error when moving on to build the image for a different version of Dynamics GP:

Macro failed to complete successfully.

The PowerShell script works by creating a macro file in the GP\data folder; the script then runs DexUtils, passing the macro file as a command parameter. The macro file tells Utilities what it needs to do (in this case create the system database); you can see this macro in the folder below, captured after this error occurred in the Docker container. CreateSysetmDatabase.mac is the macro file that PowerShell is using to create the system database.

We have no idea at this point what the error actually is, as the error description of “Macro failed to complete successfully” is not very helpful. However, you will also notice in that folder the “DexMacro.log” file. Let's look in that file by reading it with the command “type DexMacro.log”.

So below is the output of that command, the contents of the log.

There we can see the issue: I'm using SQL Server 2017, and I already know this is only compatible with GP 2018 onwards; however, you will notice from the folder name that I'm trying to use it with GP 2016. The error is badly worded, saying you need to upgrade to server 11 from 14; it actually means you need to use an older version of SQL Server for this version of Dynamics GP. Something I already knew, but my focus wasn't in this area at the time I was developing the Docker container.


Please comment if you found this helpful, it motivates me to blog more!


If you are using Dynamics GP with Visual Studio 2017 or later, you will find that the project templates for Dynamics GP do not show up in the new project or new item menus.

I wrote a solution to this in the form of a Visual Studio extension, but it seems that the extension does not show up when it is searched for in the extension gallery, something brought to my attention via the Microsoft forums.

However, if you page through the results and then go back to page one, all of a sudden it appears in the list!

Dynamics GP Extension for developing Dynamics GP Add ins - YouTube

I'll be bringing this to the attention of the VS team; in the meantime the video above shows this workaround.

