Mariano Gomez is an innovative, award-winning, results-driven entrepreneur and blogger. He has delivered numerous presentations at Microsoft Dynamics Convergence, Microsoft Dynamics GP Technical Airlift, reIMAGINE, GPUG Amplify, and the GPUG Summit.
In the previous three articles of this series, I talked about the rationale for selecting a container-based environment for development purposes; we also installed Docker, then downloaded and deployed the Microsoft Dynamics 365 Business Central (BC) containers. That set us on a path toward installing the development IDE and selecting a source code control provider to host our AL solutions.
This article will focus on the installation of Visual Studio Code (VS Code) and the AL language extension for the environment.
Installing VS Code
VS Code is to BC developers what Dexterity is to Dynamics GP developers: it provides the IDE required to host the AL language extension used to develop BC integrating solutions. Although SanScript is integrated directly into the Dexterity IDE, the analogy still holds.
1. Navigate to https://code.visualstudio.com in your browser.
2. Click the Download button on the upper right corner of the menu bar, then select the Windows 7, 8, 10 option to download the installer.
3. Once the installer has downloaded, run the executable. This initiates a wizard with a set of guided prompts to complete the installation.
4. Acknowledge the license agreement and click Next to continue. You will then be asked to enter a new installation folder or accept the default - personally, I find the defaults work best.
5. The installation process then lays down all the files and registers the appropriate components so you can begin using the application.
6. Once the installation is complete, click Finish. This should launch VS Code if you left the checkbox selected.
Installing AL language components and utilities
One of the aspects I like about VS Code is the concept of extensions. Extensions are simply plug-ins that augment the VS Code environment. One such extension is the AL Language extension, created by Microsoft.
1. Click on the Extensions button on the left Activity Bar (square button). Type "AL" in the search bar to proceed. This should surface the "AL Language" extension by Microsoft. Click on Install to add this extension to VS Code.
2. Install PowerShell by Microsoft. Following a similar process, click on the Extensions button and type PowerShell. If you prefer working in the PowerShell ISE environment or from the PowerShell command prompt, that's entirely up to you, but know there's a PowerShell extension for VS Code that brings the entire language into the VS Code IDE.
3. Install GitLens by Eric Amodio. Following a similar process, click on the Extensions button and type GitLens. With GitLens you can visualize code authorship at a glance via Git blame annotations and code lens, seamlessly navigate and explore Git repositories, gain insights via powerful comparison commands, and much more.
4. Install Insert GUID by Heath Stewart. Insert GUID is a simple command extension for Visual Studio Code to insert globally unique identifiers (GUIDs) into the Code text editor in a variety of formats.
5. Install Docker Explorer by Jun Han. With Docker Explorer you can manage Docker containers, Docker images, Docker Hub and Azure Container Registry right from VS Code.
The "Hello World" project serves to test the entire installation up to this point and is the first foray into the world of AL extensions.
1. Press Ctrl+Shift+P on your keyboard to open the VS Code Command Palette (alternatively, choose View | Command Palette from the menu). Type AL: Go! to locate the option to create a new AL project.
2. Enter a local folder where you would like to store the project. In this case, I simply removed the last portion of the default folder name and replaced it with HelloWorld.
3. You will immediately be prompted to select the server type you will be running this project against. Since we've deployed the local containers, it's safe to say we can choose Your Own Server from the drop-down.
The above operation results in the creation of a launch.json file that is added to the project.
4. Replace the server name, which defaults to localhost, with the name assigned to your BC container - in this case, http://demo-bc. Also change the instance from the default, BC130, to NAV.
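After those changes, a minimal launch.json looks roughly like this (a sketch - the exact set of properties varies with the AL extension version, and the server and instance values are the ones used in this walkthrough; object 22 is the Customer List page):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "al",
            "request": "launch",
            "name": "Your own server",
            "server": "http://demo-bc",
            "serverInstance": "NAV",
            "authentication": "UserPassword",
            "startupObjectId": 22
        }
    ]
}
```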
Press Ctrl+Shift+P and type AL: Download Symbols to retrieve the symbol packages the compiler and debugger need. More information on AL symbol packages here.
5. Press Ctrl+Shift+B on your keyboard to compile the project and create the publishing package for our "Hello World" extension.
This extension simply adds a trigger to the OnOpenPage() event of the Customer List page that displays the message "App published: Hello world". The page is loaded by default as specified in the launch.json file.
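For reference, the generated extension looks roughly like the following (a sketch - the object ID and extension name are template defaults that may differ slightly between AL extension versions):

```al
pageextension 50100 CustomerListExt extends "Customer List"
{
    trigger OnOpenPage()
    begin
        Message('App published: Hello world');
    end;
}
```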
6. Press F5 on your keyboard to launch the application in debugging mode. This should launch BC and present the message above.
Once the message has been cleared, the application will continue to load the Customer List.
In the next article, I will talk about connecting to a source code repository and what else we need to get our environment fully ready. I will also cover some techniques for working with AL files and folders that should feel familiar to Microsoft Dynamics GP developers, and how we can leverage our Dexterity knowledge to administer our projects. Until next post! MG.- Mariano Gomez, MVP
In Part 2 of this series, we covered the full installation of Docker Desktop, used to run the Dynamics 365 Business Central containers. We also saw how to use PowerShell to enable both the Hyper-V and Containers features on Windows 10.
This article will focus on the installation and troubleshooting of the Dynamics 365 Business Central containers and will provide step-by-step instructions on how to accomplish this. Remember, there are quite a few resources out there, so here they are:
But the goal of this series is to help Microsoft Dynamics GP ISVs draw similarities and contrasts with their multi-developer Microsoft Dexterity development environments.
Now that Docker has been installed, we can proceed to lay down the BC containers. This creates a fully virtualized environment with all the BC components needed for development purposes - the equivalent of having a full environment with Microsoft Dynamics GP, Web Client, IIS, and SQL Server in place for developers to code against.
Business Central Containers Installation and Troubleshooting
1. To begin, we must install the NavContainerHelper PowerShell module from the PowerShell Gallery. It contains a number of PowerShell functions that help with running and interacting with the BC containers.
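From an elevated PowerShell prompt, installing the module is a single command (a sketch; -Force simply suppresses the confirmation prompts):

```powershell
# Install the NavContainerHelper module from the PowerShell Gallery
Install-Module -Name NavContainerHelper -Force
```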
In the process of installing the NavContainerHelper module, you will be asked to add the latest NuGet provider so published packages can be retrieved. After installing the NuGet provider, I went to import the NavContainerHelper module and ran into the following error, advising me that running scripts was disabled on the system I was installing on.
By running the Get-ExecutionPolicy command, I identified that all PowerShell execution policies on my machine were set to Undefined, which in turn prevents unsigned scripts from being executed.
Since I was installing this on my local machine, I simply wanted to bypass any restrictions within the current user scope.
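In PowerShell, that looks like the following (a sketch; Bypass is the loosest policy, so use RemoteSigned instead if you prefer something stricter):

```powershell
# Inspect the current policies across all scopes
Get-ExecutionPolicy -List

# Lift the restriction for the current user only
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope CurrentUser
```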
2. With the installation of the NuGet provider and the changes to the script execution policies in place, it was time to call Import-Module to add the NavContainerHelper module.
Importing the module is a quick step.
3. Finally, it's time to create the BC containers. This is done by calling the New-NavContainer function (from the NavContainerHelper module). You will be prompted to create a user name and password to access the container and BC once installed. Here's the full call:
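The call I used looked roughly like the following sketch - parameter values, and especially the image name, change between BC releases, so treat these as placeholders rather than the exact command:

```powershell
# Prompt for the user name and password used to access the container and BC
$credential = Get-Credential

# Create a BC sandbox container named demo-bc; -updateHosts adds the
# container name to the hosts file, and -includeCSide exposes the
# classic development environment shortcuts
New-NavContainer -accept_eula `
                 -containerName "demo-bc" `
                 -auth NavUserPassword `
                 -credential $credential `
                 -imageName "mcr.microsoft.com/businesscentral/sandbox" `
                 -includeCSide `
                 -updateHosts
```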
4. The container files are downloaded to disk and extracted.
5. Once all the files are extracted, the container is initialized by Docker. If all goes well, you should see a message letting you know that the container was successfully created.
Container created successfully
If you close the PowerShell window, you will notice a new set of icons on your desktop that will allow you to load BC running on the container, as follows:
Demo-bc Web Client: shortcut to the BC web client application
Demo-bc Command Prompt: access to the container command prompt
Demo-bc PowerShell: access to the PowerShell prompt running on the container
Demo-bc Windows Client: launches the Microsoft Dynamics NAV on-premises client
Demo-bc WinClient Debugger*
Demo-bc CSIDE: launches the CSIDE development environment for BC.
Desktop after a successful BC container deployment
Double-click on the Demo-bc Web Client icon to test the container deployment.
With the installation of Docker and the BC containers, we have completed all the supporting environment setup. Be sure to play around with the new options, in particular both the BC web client and Windows client components. It is important to begin gaining an understanding of the functional aspects of the application before you embark on developing for this platform - no different from what you already did for Dynamics GP.
We are not quite done here, but since I am supposed to be a rational human being and respect the number of parts I chose for this series, I will close out this topic in a new series showing how to add Visual Studio Code and how to select and connect to a source control repository, so bear with me. Until next post! MG.- Mariano Gomez, MVP
In Part 1 of this series, I outlined the principles and detailed the reasoning behind why we chose to build our Microsoft Dynamics 365 Business Central development environment using Windows Docker containers.
In the Dynamics GP world, we are not quite used to containers, so let me start with the definition, straight from the horse's mouth (so to speak). According to the wizards over at Docker, "A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings".
The first thing to highlight from the definition is "standard unit of software". In fact, that's key to this whole thing! Standardization ensures that every developer in the organization is building and testing code against the same reliable environment. In the Dynamics GP world, although we have the ability to build stable, reliable development environments, consistency is not always easy to achieve unless we are using desktop virtualization, which intrinsically poses its own challenges.
But this article is about installing Docker, so let's get to it.
Installing Docker
The Windows 10 Anniversary Update (version 1607) saw the introduction of Windows containers, a feature that allows you to install and deploy Docker and other container virtualization technologies. Follow these steps to complete a successful installation of Docker.
NOTE: from now on, most of the work will be done in PowerShell.
Enable Windows Containers feature
1. Open Windows PowerShell (not PowerShell ISE) with elevated permissions: click Start, type "PowerShell", and choose "Run as Administrator" to continue.
2. You must first enable Hyper-V. In PowerShell type the following command:
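The feature can be enabled in one line (a sketch; it assumes a Windows 10 edition that supports Hyper-V, such as Pro or Enterprise):

```powershell
# Enable the Hyper-V and Containers features; both are required
# to run Windows containers with Docker
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V, Containers -All
```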
NOTE: If you previously installed Windows Containers, you must first uninstall the feature and then reinstall it using PowerShell. This means you need to back up all your container images prior to running this command.
It is recommended to reboot your machine after this operation to allow all components to be properly registered.
Download and Install Docker
1. To complete the Docker installation, you must first go to https://www.docker.com and choose Products | Docker Desktop.
Products | Docker Desktop
If you don't already have a Docker account, you must create one with some basic info - user name, password, and email - in order to download Docker. Next, confirm your email address by clicking the link you will receive in the inbox associated with the Docker account you created. You can now log into Docker Hub and download the Docker Desktop for Windows engine. By default, the download is placed in your Downloads folder, unless your browser has been configured differently.
2. Once you've gotten through the account validation and download process, proceed to run the Docker installer (installer.exe). Upon launching the installer, the process begins by downloading a number of installation packages.
3. On the configuration screen, you will be prompted to select whether you want to run Windows containers or Linux containers. The choice here should be obvious, but you can change this after the fact.
4. Upon clicking OK, the installer begins to unpack all files accordingly.
5. If everything goes as expected, you will be asked to sign out and sign back into Windows.
6. After signing into Windows, the service will start and you will be presented with a window to enter your Docker account information. This, according to Docker, is to track application usage.
7. I don't know if this is a bug in the installer, but even after selecting to run Windows containers in step 3, I had to manually right-click the Docker taskbar item and select Switch to Windows containers.
It is always good to test Docker to ensure everything is functioning as expected. For this, we can turn to PowerShell once more and execute either of the following two commands:
docker --version
docker info
Docker version and information commands
These steps conclude the installation of Docker. In the next installment, we will deploy the actual Microsoft Dynamics 365 Business Central containers and prepare you for what's next.
Hope you find this useful. Until next post! MG.- Mariano Gomez, MVP
I've recently been honing my Microsoft Dynamics 365 Business Central (BC) skills, without leaving my beloved Microsoft Dynamics GP behind. One of the things I have been working on is making sure customers understand the BI insights gained via data replication between the two systems. As a result, I find myself working through the replication configuration a few times a month.
Yesterday, I removed a previous Fabrikam company created via replication from BC and attempted a new replication. If you are not familiar with configuring the data replication process between GP and BC, I will be creating a video on this soon, so please stay tuned.
NOTE: The integration runtime service has also been updated, so you will probably need to download a new version.
After setting up the Integration Runtime Service and clicking Next to establish the connection between Intelligent Cloud and my on-premises GP, I received the following error:
"SQL database must be at compatibility level 130 or higher"
The article seems to indicate that compatibility level 130 has been a requirement since the January 2019 release, but it also seems to suggest that this applies only to the NAV / BC replication process, not GP. In fact, as I mentioned before, just a couple of weeks ago I was able to create the replication with compatibility level 120.
As it happened, my attempt to replicate Fabrikam took place on April 2, 2019, and that particular BC release introduced Intelligent Cloud synchronization for GP historical data. Since this version of the sync uses JSON to track changes between the previous sync and the one currently being executed, it requires databases to be at compatibility level 130 at the very least. This requirement wasn't completely documented in the April '19 release notes, but then again, release notes aren't always 100% complete at the time of posting.
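If you run into the same error, raising the compatibility level is a one-line change per database (a sketch - TWO is the Fabrikam sample company database, so substitute your own database names):

```sql
-- Level 130 requires SQL Server 2016 or later
ALTER DATABASE [TWO] SET COMPATIBILITY_LEVEL = 130;
```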
With that said, customers need to be aware that historical data replication will require Microsoft SQL Server 2016 at the very least. These changes will be documented in the April '19 release notes and an entry will be added to the GP 2018 system requirements page.
Hope you find this information useful. Until next post, MG.- Mariano Gomez, MVP
I just recently returned from the Microsoft MVP Global Summit 2019, where I had a chance to meet some of the top minds in the Microsoft PowerApps and Flow space. This was a truly exciting moment, as I have been learning from the very same MVPs I met - yes, we do learn from each other!
In one of my hallway discussions, I ran into my buddy Mehdi Slaoui Adaloussi, Principal Program Manager at Microsoft, whom I first met at Microsoft Build 2018. I mentioned that I had read Mehdi's recent article on reusable components and, in particular, that I had been playing with his version of the Numeric Up/Down control.
I must start by saying that the components Mehdi put in place expose some very clever implementation techniques, so I highly recommend you download the msapp files and load them up in your environment and study them.
The Numeric Up/Down control in particular caught my attention, as it required multiple repeated individual clicks to advance the value up or down, which could take away from the user experience. So I decided to build from where Mehdi left off by changing a few things.
NOTE: my implementation does not account for the control stylistic settings added by Mehdi, but this is surely an easy feat to accomplish.
NOTE: You will need to enable the Components experimental feature before you can follow these steps.
1. Create a new component
Click on New Component under the Components tab to create your component, then rename it from the default name to NumericUpDn.
2. Add the controls needed to create this component.
For this control, we will need the following 6 controls:
2 button controls (from the toolbar)
The Up icon (from the Icons gallery)
The Down icon (from the Icons gallery)
A Timer control (from the Controls gallery)
A Text Input control (from the Text gallery)
I always recommend worrying about the layout and aesthetics at the very end of the implementation. Nonetheless, I keep the controls close together for ease of organization later.
The most important thing right now is to get the needed controls in place. I will explain the use of each control as we go along.
3. Add Component custom properties
For the Numeric Up Down control, we will need 5 custom properties, as follows:
Default: Number / Input. This will serve to seed our numeric initial text input value when the control is first loaded within an app.
Min: Number / Input. This will be the lower limit for our numeric up/down control. When clicking the down button, we will check to ensure the control value itself never gets below the minimum value.
Max: Number / Input. This will be the upper limit for our numeric up/down control. When clicking the up button, we will check to ensure the control value itself never exceeds the maximum value.
Sensitivity: Number / Input. This will control how fast or slow the button press behaves when increasing or decreasing the numeric value in the text input field.
Value: Number / Output. This will be the value returned by the control to the calling app.
4. Rename the Controls
Now that we have all the controls and custom properties in place, we will begin by renaming the controls for readability's sake and ease of following along - it's also a good practice.
Button1, rename to BtnUp
Button2, rename to BtnDn
Icon1, rename to IcnUp
Icon2, rename to IcnDn
TextInput1, rename to NumValue
Rename component controls
NOTE: renaming the Timer control seems to break the timer itself - this is a bug I have reported to the PowerApps team.
5. Add some logic
NumValue control: for this control, change the Format to Number. We will want to ensure the text input control's Default property is set to the incoming Default custom property value when the initial value is blank, as follows:
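One plausible way to express this (a sketch - `currentValue` is a variable name of my own choosing to hold the running value, not necessarily what the original implementation uses):

```
If(IsBlank(currentValue), NumericUpDn.Default, currentValue)
```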
Timer1 control: this is perhaps the most important control on the component, since it will drive the overall behavior of the component. First, let's set some properties:
Start property. We will want to start the timer when either the BtnUp or BtnDn pressed event fires. Since the Start property is a true/false (boolean) property, we can set it to BtnUp.Pressed || BtnDn.Pressed.
Duration property. We will set this property to NumericUpDn.Sensitivity. Basically, we are setting a delay between each increment or decrement of the NumValue control.
Repeat property. Set to true. Since we want to persist the button press event, we want the timer to restart each time after the Duration cycle is completed.
Reset property. We need the timer to reset each time either button is released from a pressed state. Hence, we can use the same true/false expression as the Start property: BtnUp.Pressed || BtnDn.Pressed.
Timer control Data settings
Phew! We are done with the basic settings for the Timer control.
Next, the timer must perform a couple of actions: 1) on start, it will evaluate which button was pressed and, based on the button, increase or decrease the value in the text control; 2) on end, it will evaluate whether we've reached the lower or upper limits established by the Min and Max custom properties, respectively.
For the OnTimerStart event,
For the OnTimerEnd event,
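As a sketch, the two event formulas could look like this (`currentValue` is a variable name of my own choosing, and variable scoping inside components was still evolving while the feature was in preview, so adjust as needed):

```
// OnTimerStart: increment or decrement depending on which button is pressed
If(BtnUp.Pressed,
    UpdateContext({ currentValue: Value(NumValue.Text) + 1 }),
    UpdateContext({ currentValue: Value(NumValue.Text) - 1 })
)

// OnTimerEnd: clamp the value to the Min and Max custom properties
If(currentValue > NumericUpDn.Max, UpdateContext({ currentValue: NumericUpDn.Max }));
If(currentValue < NumericUpDn.Min, UpdateContext({ currentValue: NumericUpDn.Min }))
```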
BtnUp and BtnDn controls: we also want a user to retain the ability to click the buttons without persisting the pressed event, effectively advancing the NumValue control one by one until the upper or lower limits are reached. Hence we must also add some validation to the OnSelect event of each button.
For the BtnUp OnSelect event,
BtnUp OnSelect event
For the BtnDn OnSelect event,
BtnDn OnSelect event
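A sketch of the validation in both OnSelect handlers (`currentValue` is a variable name of my own choosing; the original formulas are in the msapp file):

```
// BtnUp.OnSelect: advance by one, but never beyond Max
If(Value(NumValue.Text) + 1 <= NumericUpDn.Max,
    UpdateContext({ currentValue: Value(NumValue.Text) + 1 })
)

// BtnDn.OnSelect: decrease by one, but never below Min
If(Value(NumValue.Text) - 1 >= NumericUpDn.Min,
    UpdateContext({ currentValue: Value(NumValue.Text) - 1 })
)
```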
6. Now some aesthetics
We have completed our low-code implementation. Now we are off to organizing the controls and setting some properties that will make this a useful control.
a) Select the BtnUp control and set the size to 62 width and 24 height; set the x and y positions to 257 and 2, respectively. Set the Border Radius property to 5. Set the Fill property to RGBA(56, 96, 178, 0). Set the BorderColor, HoverColor, and HoverFill properties to BtnUp.Fill. Clear the Text property (blank).
b) Select the BtnDn control and set the size to 62 width and 24 height; set the x and y positions to 257 and 26, respectively. Set the Border Radius property to 5. Set the Fill property to RGBA(56, 96, 178, 0). Set the BorderColor, HoverColor, and HoverFill properties to BtnDn.Fill. Clear the Text property (blank).
c) Select the IcnUp control and set the size to 64 width and 25 height; set the x and y positions to 256 and 1, respectively. Set the Fill property to RGBA(0, 18, 107, 1); set the Color property to RGBA(255, 255, 255, 1).
d) Select the IcnDn control and set the size to 64 width and 26 height; set the x and y positions to 256 and 26, respectively. Set the Fill property to RGBA(0, 18, 107, 1); set the Color property to RGBA(255, 255, 255, 1).
NOTE: By setting these properties, the buttons and the icons are now overlaid on each other. To further access these control properties, use the control navigation pane on the left of the Design Studio.
e) Select the NumValue control and set the size to 256 width and 51 height; set the x and y positions to 0 and 1, respectively.
f) Finally, set the NumericUpDn component size to 322 width and 57 height.
You should now have something that looks like this:
NumericUpDn component (shown at 150%)
7. Testing the Component
To test the component, I added the control from the Components gallery, a slider for the timer sensitivity, and a couple of Text Input boxes, along with a label to track the output from the componentized control. You can quickly guess what goes where.
NOTE: please ensure the Text Input boxes are of numeric type.
The end result can be appreciated in this video.
Numeric Up/Down Control with persisted button press using Components - YouTube
You can download the component control from the PowerApps Community Apps Gallery, here. Until next post, MG.- Mariano Gomez, MVP
A lot of the guiding principles for deploying Named Printers in a Terminal Server or Citrix environment come from two of my favorite articles, written by my good friend and fellow Microsoft Business Applications MVP, David Musgrave (twitter: @winthropdc). David happens to be the creator of Named Printers and probably understands the product better than anyone I know. You can read his articles here:
These articles continue to be very relevant if you are in an environment where a print (or printer) server is the norm and published printers are standard. Print servers are used to interface printers with devices on a network, but mostly to standardize administrative policies and balance the document load that printers can manage. Part of the standardization is to ensure printers are uniquely identified across the network, regardless of whether you are accessing the network remotely or are physically connected to it. Print servers also ensure that print drivers are consistent across the network, which in turn reduces the possibility of driver clashes or unsupported drivers.
If you are familiar with Named Printers, you know that one of the things it likes is standard drivers and standard printer names. The minute the binary information about a printer driver or name, stored at the OS level, no longer matches the binary information about the same printer stored by Named Printers at the database level, chances are Named Printers will cease to work properly. However, in a print server environment with published printers, this is easily fixed by recapturing the printer properties in Named Printers.
But why am I telling you this? In the era of BYOD and remote offices, system administrators no longer have the time or the willingness to deal with such mundane tasks as worrying about printers and drivers. Heck, most of us now work from home or roam between different offices. Yet, as users, we still need the ability to perform the simple, mundane task of printing documents and generating reports from our ERP system. Enter printer redirection.
Printer redirection was first implemented in Windows 2000 Server. It enables users to print to their locally installed printers from a terminal services session: the Terminal Server client enumerates the local print queues to detect the locally installed printers, and this list is presented to the server, which creates the print queues in the session. The TS client provides the driver string name for the locally installed printers, and if the server has matching drivers installed, the printers are redirected. When we look at Printers on the Terminal Server, a redirected printer will have a name similar to what is shown below:
Note the printer name is presented with a Printer_Name (redirected SessionId) label. The session Id changes each and every time the user logs in and out of the terminal services session. Given what we know about Named Printers, it's safe to say this would wreak havoc, causing errors like the following to show up during printing:
Document_Printer "Printer_Name (Redirected SessionId)" or PaperSource "sourceInfo" is not valid
You can go back into Named Printers and recapture the printer properties if need be, but the same will need to be done each and every time a user logs in and out of the terminal services session. If you have more than one user directing documents to the same physical printer via Named Printers, then this solution (recapturing the printer properties) is simply unusable.
So, what can be done?
Thinking about the problem, I realized this could not be just a Microsoft Dynamics GP/Named Printers issue. There is a multitude of applications designed to capture and store the printer properties they rely on to create a consistent print experience, so I started wondering how others were dealing with this issue. Off to Google I went with search terms like "rename printers", "rename redirected printer", etc. I finally ended up with a very interesting hit on a company called Babbage Technologies, located in Minnesota. Babbage has a small product called RenPrinters which, in essence, applies a regular expression to the redirected printer name, allowing you to specify a static name built from a combination of the printer name, user name, and machine name. You can pick and choose which combination to use. This is done at the server operating system level, which then allows you to map that statically named printer in Named Printers.
The following shows the main application control panel:
There are a number of predefined regular expressions along with a number of predefined printer name formats. You could configure Named Printers to use a template user scenario or create a template per machine, depending on your specific needs. Another important feature is the ability to exclude printers using specific drivers from being renamed, giving you greater control over how the application behaves.
After a server reboot, the printer now appears as defined by the Printer Name Format expression:
This is super useful now as Named Printers is once again happy: standard printer name, standard properties!
Now, to be fair, there are other solutions on the market. There's an open source solution called Printerceptor, currently available on GitHub, which uses PowerShell to rename redirected printers and applies the same concept of regular expressions and name formatting to do the job. Of course, open source means you are subject to the developer's availability to fix a problem, if one is found.
Hope you found this informative and helpful. Until next post, MG.- Mariano Gomez, MVP
In Part 1 of this series, you saw how my first version of the digital clock went. Although it got the job done, it was plagued with repetitive code, repetitive controls, and an oversaturation of variables, which in turn rendered the application hard to follow and, worse yet, affected performance.
In this article, I will show how to use PowerApps Components to promote reusability and decrease the code footprint. Components is currently a preview feature, hence a word of caution when using it: you may need to retest your app once the feature becomes generally available.
The previous experience showed us that we can save time and code by creating a component to be used for the digits of the clock. This digit component could then be enhanced by allowing the developer to pass in the digit to be displayed and the foreground and background colors of the segments - all set up as custom properties on the component - as shown here:
We have also added code for each of the segments that will bring them to the foreground or place them in the background, based on the DigitValue custom property. Here's a code snippet for Fill property for the top segment of our digit:
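Something along these lines (a sketch - `Digit` is my placeholder for the component's name and the two color properties stand in for the custom properties mentioned above; only DigitValue comes from the actual implementation). The top segment is lit for every digit except 1 and 4:

```
// Fill for the top segment: foreground color when the digit uses it,
// background color otherwise
If(Digit.DigitValue in [0, 2, 3, 5, 6, 7, 8, 9],
    Digit.SegmentColor,
    Digit.BackgroundColor
)
```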
Note that here we need to reference the name of the component within the scope of the variable. All the code to implement the additional segments can be found in the previous article or by downloading a copy of the msapp file for this project.
Once we have the component in place, we can move to the app surface, where we add the 3 digits as components, the separating dots, and 4 timers, as in our previous app. Since the code to activate the segments now lives in the component itself (as shown above), there's no need to add 3 buttons to encapsulate that code anymore.
Hence our first timer control, Timer1, will simply do 2 things:
On start, it will evaluate the night mode toggle and set the proper background and foreground depending on the setup parameters (on the Setup screen)
On end, it will advance the digit counter.
NOTE: Each timer is set to 1000 milliseconds with the Repeat property set to true.
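The two Timer1 behaviors described above could be sketched like this (control and variable names are mine, as are the specific colors - only the behavior itself comes from the text):

```
// OnTimerStart: apply colors based on the night mode toggle (Setup screen)
If(TglNightMode.Value,
    Set(fgColor, Color.White); Set(bgColor, Color.Black),
    Set(fgColor, Color.Black); Set(bgColor, Color.White)
);

// OnTimerEnd: advance the rightmost digit's counter, rolling over after 9
Set(SecondsDigit, Mod(SecondsDigit + 1, 10))
```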
The end result is a super streamlined application, with a reusable component and little code to go along, keeping up with the Low Code spirit of PowerApps.
The full implementation of this project can be found on the PowerApps community website, here. Until next post, MG.- Mariano Gomez, MVP
When I first set out to build a digital clock with night mode, I figured I would just start from what I know. Each digit of the clock is composed of 7 segments, and each segment behaves in a binary way based on the number it needs to display.
To create each segment, I would use a Rectangle from the Icons gallery. Since I initially set out to add 3 digits to the clock - two to display seconds and one to display minutes - this would require a whopping 21 rectangles, plus 2 for the blinking dots, bringing the total to 23.
Controlling each digit required the use of 3 timers. Each timer would have to evaluate a value to determine which segments to display. For the rightmost digit, the following code is used for the timer, Timer1:
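A sketch of what that Timer1 logic might look like, using the MyCounter variable and Button1 control named later in the article (the exact formula is an assumption):

```
// Timer1 OnTimerStart: advance the counter for the rightmost digit,
// then run Button1's OnSelect code, which lights the proper segments
UpdateContext({MyCounter: MyCounter + 1});
Select(Button1)
```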
But keep in mind that this is only for the first digit! For the second digit (from right to left), a second timer evaluates whether the MyCounter variable is greater than 9, to advance yet a second counter for the second digit. Timer2 OnTimerStart()
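One possible shape for that Timer2 logic (MySecondCounter and Button2 are illustrative names; Button2 is mentioned below):

```
// Timer2 OnTimerStart: when the first counter rolls past 9, reset it
// and advance the counter for the second digit
If(
    MyCounter > 9,
    UpdateContext({MyCounter: 0, MySecondCounter: MySecondCounter + 1})
);
Select(Button2)
```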
In turn, we need to evaluate this second counter to ensure it does not exceed a value of 5.
The Button2 code now looks awfully similar to the code we saw for Button1, but it controls the display of segments for the second digit. In fact, except for the names of certain variables, the code is the same!
Now we need to repeat all of the above for the leftmost digit. Once again, we introduce a new timer, with events that evaluate the counter status to advance a third variable, and a button to encapsulate the code that evaluates which segments to activate for the third digit. Timer3 OnTimerStart()
In turn, we need to evaluate the first and second counters for when they reset to zero, to advance the minute counter.
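A sketch of that Timer3 logic, under the same naming assumptions as above (MyMinuteCounter and Button3 are illustrative names):

```
// Timer3 OnTimerStart: when both seconds counters have rolled back to
// zero, advance the minutes counter, then run the third digit's code
If(
    MyCounter = 0 && MySecondCounter = 0,
    UpdateContext({MyMinuteCounter: MyMinuteCounter + 1})
);
Select(Button3)
```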
As can be concluded from this exercise, the amount of (repeated) code, variable declarations, and controls increases progressively with each 7-segment digit added. Another thing I noticed is that the timers are not as responsive and the application is slow overall - not the type of experience you would like an end-user to have.
NOTE: I am not sure components are right for every application case, but one thing I will insist on is grouping objects and controls wherever possible to facilitate application readability and control navigation as an app builder.
Here's the final result:
PowerApps: Digital Clock - YouTube
On the positive side, building this application using the painstaking process above taught me how I could leverage PowerApps components to promote code reusability, improve user experience, and increase application responsiveness. That will be the subject of my next article, Part 2 of this series.
This version of the app can be downloaded here (OneDrive). Until next post, MG.- Mariano Gomez, MVP
As of late, I have been very interested in all things "Motion" as it relates to developing PowerApps applications - see #PowerApps: Motion Patterns with Parametric Equations. Although most of the apps you will see tend to be built around solving business problems, you cannot really dismiss the capabilities of PowerApps as a gaming platform.
In this article, I explore a simple object proximity and collision detection approach, based on some simple logic. The world of gaming uses more sophisticated algorithms based on the laws of physics and whatnot, but keep in mind that PowerApps is designed to be a low code/no code environment, hence access to user driven programmatic methods is extremely limited.
The following is a representation of the actual canvas app I created for this example. It consists of 4 directional arrows that provide motion to Smiley, a simple character added from the icons gallery. In addition, there is a rectangular obstacle, also added from the icons gallery. Finally, there is a slider that controls the size of the steps Smiley takes in any direction.
There are also two hidden button controls, which we will use for branching logic. I have found that putting code in hidden buttons acts as a way to encapsulate logic that you can execute as if it were a method, since you can use PowerApps' Select function to run the On Select code behind the button.
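In other words, from any behavior formula you can do something like this (EvalLogic1 is an illustrative name for one of the hidden buttons):

```
// Runs the On Select code behind the hidden button, as if calling a method
Select(EvalLogic1)
```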
For ease of setting control properties, I have grouped the directional arrows in one group called Arrows, and the hidden buttons in another group called BusinessLogic, as shown on the left pane of the design surface.
Adding the Code
Screen On Visible
For the On Visible event, we are simply initializing the position of our Smiley character. Smiley will be placed at roughly the center of the screen. The (X,Y) coordinates are stored in two context variables called xPos and yPos, which are then assigned to the X and Y properties of the Smiley character.
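A sketch of that On Visible formula, assuming the screen is named Screen1 (an illustrative name):

```
// Screen On Visible: place Smiley at roughly the center of the screen.
// Smiley's X property is then set to xPos, and its Y property to yPos.
UpdateContext({
    xPos: (Screen1.Width - Smiley.Width) / 2,
    yPos: (Screen1.Height - Smiley.Height) / 2
})
```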
You can set the obstacle icon away from Smiley, anywhere you want to.
Note: you could add a couple of controls to input the position, width, and height of the obstacle, but for simplicity's sake, I have chosen to do this manually.
Directional Arrows Group
The directional arrows will control Smiley's motion up, down, left, or right, according to the arrow pressed. The up arrow will simply reduce the position on the Y axis, while the down arrow will increase it. Similarly, the left arrow will decrease Smiley's position on the X axis, and the right arrow will increase it. All these movements are done in incremental steps dictated by the slider control's value, setting the pace of the movement, if you will.
Note that I have chosen to define a directional variable, SmileyDirection, which tracks whether the position value is decreasing (up, left) or increasing (down, right).
Up Arrow
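A sketch of the up arrow's On Select formula (Slider1 and EvalLogic1 are illustrative names, as are the "Decreasing"/"Increasing" values stored in SmileyDirection):

```
// Up arrow On Select: moving up decreases the Y position by the step
// size taken from the slider, then re-runs the proximity evaluation
UpdateContext({
    SmileyDirection: "Decreasing",
    yPos: yPos - Slider1.Value
});
Select(EvalLogic1)
```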
Business Logic Group
Each time Smiley's motion is affected by the directional arrows, we need to evaluate whether its next step in that direction will put it in proximity with the obstacle, hence leaving it in danger of colliding on the next move.
The first logic branch says:
If Smiley's next step in the current direction (top, bottom) is within the horizontal boundaries of the obstacle - dictated by the value pair Rc1.X and Rc1.X + Rc1.Width - it will only be in the clear if it's outside of the vertical boundaries (the obstacle's height). Otherwise, Smiley would be in imminent danger of colliding on the next move in the current direction of motion - and this is where it gets interesting!
We also need to factor in Smiley's own height when the motion is toward the obstacle from the top, since the value pair (Smiley.X, Smiley.Y) only represents the upper left corner of Smiley's position. In complex game design, this is done from the object's center of gravity, but remember, we are not that fancy here.
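Under those assumptions, the first branch might be sketched as follows (Rc1 and CollisionWarning follow the article's names; Slider1 and the direction strings are illustrative):

```
// First logic branch: Smiley moving up or down toward the obstacle Rc1
UpdateContext({
    CollisionWarning:
        If(
            // Smiley overlaps the obstacle's horizontal span
            xPos + Smiley.Width > Rc1.X && xPos < Rc1.X + Rc1.Width &&
            (
                // moving down: the next step reaches the obstacle's top edge
                (SmileyDirection = "Increasing" &&
                    yPos + Smiley.Height + Slider1.Value >= Rc1.Y &&
                    yPos < Rc1.Y + Rc1.Height) ||
                // moving up: the next step reaches the obstacle's bottom edge
                (SmileyDirection = "Decreasing" &&
                    yPos - Slider1.Value <= Rc1.Y + Rc1.Height &&
                    yPos + Smiley.Height > Rc1.Y)
            ),
            "Collision imminent!",
            ""
        )
})
```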
If Smiley is not within the horizontal boundaries of the obstacle, then we need to similarly evaluate whether Smiley is within the vertical boundaries of the obstacle while traveling left or right - dictated by the value pair Rc1.Y and Rc1.Y + Rc1.Height. Then, it stands to reason that Smiley will only be in the clear if it's outside the horizontal boundaries in the direction of travel.
Again, we want to factor in Smiley's width to ensure that if we are traveling towards the object from the right, we detect its proximity more accurately.
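The second branch mirrors the first, swapping the axes (again, Slider1 and the direction strings are illustrative names):

```
// Second logic branch: Smiley moving left or right toward the obstacle Rc1
UpdateContext({
    CollisionWarning:
        If(
            // Smiley overlaps the obstacle's vertical span
            yPos + Smiley.Height > Rc1.Y && yPos < Rc1.Y + Rc1.Height &&
            (
                // moving right: the next step reaches the obstacle's left edge
                (SmileyDirection = "Increasing" &&
                    xPos + Smiley.Width + Slider1.Value >= Rc1.X &&
                    xPos < Rc1.X + Rc1.Width) ||
                // moving left: the next step reaches the obstacle's right edge
                (SmileyDirection = "Decreasing" &&
                    xPos - Slider1.Value <= Rc1.X + Rc1.Width &&
                    xPos + Smiley.Width > Rc1.X)
            ),
            "Collision imminent!",
            ""
        )
})
```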
The Warning Label
Each EvalLogic button updates the CollisionWarning context variable, which is then assigned to the Text property of the label (placed within the directional arrows pane).
The Final Results
When all the pieces are in place, here is the final result:
Simple Object Proximity and Collision Detection - YouTube