As Tableau users increasingly seek insight on the go, mobile dashboards have become a common request. For some time now, Tableau has included the ability to create layouts for different device types: desktop (Default), tablet and phone.

In the most recent release (2019.1) Tableau enhanced the capability by introducing automatic dashboard phone layouts. This means that if you've built a dashboard and published just the desktop version, Tableau will do its best to generate a mobile layout for you if a user views it on a phone. In the video below, we take one of the sample workbooks and switch it to a phone layout, and you can see that Tableau automatically flows the content to suit the orientation and phone format with a long, scroll-based dashboard. This is what the user would see if you didn't add a phone layout yourself.



This feature comes enabled by default and is a great time saver, but in some cases you might want to switch it off, either to build something more bespoke or because you don't want the web version of your workbook to have a phone layout you haven't had the chance to check. To do this you have two approaches.

The first is to keep the automatic phone layout creation on but, once you have your layout, customise it and make it bespoke, as I did in the video above when I switched to 'Edit layout myself'.

The second is to disable it in the menu settings so the option isn't on by default. This way, if you only build a desktop design and publish it up, users on mobile devices will still only see the desktop design, as the automatic creation isn't enabled. Note: if you use Tableau Public, auto-generated phone layouts are off by default, so there's no need to do this; likewise, this menu is how you enable them.

The post Disabling the auto generation of phone layouts in Tableau appeared first on The Information Lab.


Note, if this blog looks familiar, that's because it's been ported from my personal blog site, benjnmoss.wordpress.com.

Earlier, I was playing around with manipulating file path strings, looking to develop an application that unions all files in the same location and of the same type as a template file (yes, I know these macros already exist, but I wanted to develop my own to understand the workings).

The route I went for was that the user browses to one of the files, and I then manipulate this path, essentially replacing the filename with the Alteryx wildcard character '*'.

This means that I wanted to manipulate a string that looks like this

C:\Users\Ben Moss\Desktop\filenamehere.filetypehere

into something that looked like this…

C:\Users\Ben Moss\Desktop\*.filetypehere

Given the syntax of file paths, I knew that anything after the last backslash (\) character and before the first (and only) full stop represented the filename.

So my challenge was to find the positions of these characters.

Finding the location of a character is relatively simple, and most of you will probably know that, in Alteryx, the following formula…

findstring([field], ".")

will return the position of the first full stop within the field.

It is possible, using a series of nested findstrings combined with the right and length functions, to find the nth occurrence of a character.

Something like

findstring([Field1], "\")

//position of 1st backslash

+

findstring(right([Field1], length([Field1]) - findstring([Field1], "\") - 1), "\")

//position of 2nd backslash in the string trimmed from after the first backslash

+1

//add one character because the first backslash is taken out of the equation but it still forms part of the string

would return the position of the 2nd instance, for example. However, what if we don't know the value of N? We can't know it in advance (file paths can contain any number of backslashes), but we do know we need the final one, that's for sure.
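To make the nested pattern concrete, here's a quick Python sketch of the same nth-occurrence idea (the helper name `find_nth` is my own, not an Alteryx function):

```python
def find_nth(text, char, n):
    """Return the 0-based position of the nth occurrence of char, or -1."""
    pos = -1
    for _ in range(n):
        # Resume searching just after the previous hit, like the
        # nested findstring/right trick trims the string each pass.
        pos = text.find(char, pos + 1)
        if pos == -1:
            return -1
    return pos

path = r"C:\Users\Ben Moss\Desktop\filenamehere.filetypehere"
print(find_nth(path, "\\", 2))  # → 8, the 2nd backslash
```

The catch is exactly the one described above: you still need to know N up front.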

That’s when, whilst browsing Alteryx’s string functions, I fell upon the ‘ReverseString()’ function, which does exactly what it says on the tin.

ReverseString('Ben'), for instance, would return 'neB'.

So

ReverseString("C:\Users\Ben Moss\Desktop\filenamehere.filetypehere")

would return

erehepytelif.erehemanelif\potkseD\ssoM neB\sresU\:C

Which, whilst looking like absolute nonsense, is actually very useful, as we can now use the findstring() function to return the position of the 1st backslash in our reversed string, before taking that away from the length of the string to get its actual position.

Now we have found our position, we just need to combine all of our formulas to develop our final wildcard path.
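As a sanity check, here's the whole reverse-string trick sketched in Python (the function name is my own; the logic mirrors the ReverseString() and findstring() steps described above):

```python
def wildcard_path(path):
    # Reverse the path so the last backslash becomes the first one,
    # mirroring the ReverseString() + findstring() trick.
    reversed_path = path[::-1]
    last_slash = len(path) - 1 - reversed_path.find("\\")
    dot = path.find(".")  # the first (and only) full stop
    # Keep everything up to and including the last backslash,
    # swap the filename for '*', and keep the extension.
    return path[:last_slash + 1] + "*" + path[dot:]

print(wildcard_path(r"C:\Users\Ben Moss\Desktop\filenamehere.filetypehere"))
# → C:\Users\Ben Moss\Desktop\*.filetypehere
```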

And there you go, that's how I resolved my string issue.

Now, 2019 Ben has had a bit more exposure to Alteryx than 2017 Ben, and he now knows about the function ‘FileGetFileName()’, which could have been used in this scenario.

FileGetFileName([FilePathField])

However, that function is only useful in file path scenarios, whereas the technique in this blog has far wider applications!

Ben

The post Finding the last instance of a character within a string using Alteryx appeared first on The Information Lab.


How do you read 01/05/2019? Do you read it as the first of May 2019? Or the fifth of January 2019? If you’re an international organisation, you might have two different people reading the same date two different ways. Awkward!

What is an ISO date then? ISO is the International Organisation for Standardisation. They created the ISO calendar to stop confusion when talking about dates internationally. Instead, ISO 8601 writes dates as YYYY-MM-DD (2019-05-01 is the first of May 2019).

ISO 8601 also labels weeks differently too. The first week in a year is where you’ll usually notice the biggest difference to the standard week numbering of the Gregorian calendar. ISO week 01 of the year starts when the week has a majority (4 or more) of its days in January. ISO weeks also always start on a Monday.

In 2016, for example, the 1st, 2nd and 3rd of January were in week 53 of 2015 in the ISO calendar, not week 1 of 2016 as they would be in the Gregorian.
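Python's standard library implements the same ISO 8601 week rules, which makes it handy for double-checking examples like this:

```python
from datetime import date

# isocalendar() returns (ISO year, ISO week, ISO weekday);
# ISO weekdays run Monday=1 to Sunday=7.
print(date(2016, 1, 1).isocalendar())  # (2015, 53, 5): a Friday in week 53 of ISO year 2015
print(date(2016, 1, 4).isocalendar())  # (2016, 1, 1): the Monday that starts ISO week 1 of 2016
```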

Tableau Desktop 2018.2 updated to include ISO 8601 dates and calendar. Why is that important? Well if you work in an industry or business that uses International Standard dates, this update will save you having to write some really tricky date calculations in Tableau.

So how can we use ISO formats for our dates in Tableau Desktop? To start you’re going to have to check the locale of your computer. Only European locales are supported. Then you’ve got a few options:

Data source date properties

Always want your dates in your data source to be in ISO format? Then you’ll want to change that at the data source level. Simply right-click your data source in the data pane and select date properties.

You’ll also want to make sure your week start day is set to Monday too.

Then you’ll want to change the date format option from automatic to custom and read on below.

Formatting date fields

You can edit the format of the date field in the data date properties box (as above). This will then be the default format of all dates in the data source. If you’d rather not have all dates formatted as ISO in your data source, then you can select the date fields individually from the data pane instead.

The [Y] tells Tableau you want to use the ISO calendar. If you wanted to use the Gregorian simply change it to [y]. Gregorian is also the default format in Tableau still.

Date calculations and syntax

Tableau has a lot of really useful date functions you can use in your calculations. Now you also have some new syntax you can use in these functions to specify the results are displayed as ISO dates.

Most of Tableau’s date functions have syntax for defining the date_part. Here’s an example of DATENAME in operation:

Here we're asking Tableau to return the ISO week of our Order Date field, by specifying 'iso-week' in the calculation syntax.

As we’ve specified we want the ISO week returned, you can see the results (in the third column) are quite different from the Gregorian week number (in the second column).

Here’s a list of the Tableau date_part syntax, including the new ISO date parts.

Now with ISO dates in Tableau you can make sure your organisation is always on the same page.

The post Quick guide: ISO 8601 dates in Tableau appeared first on The Information Lab.


Before we start: my colleague Paul Houghton has already produced a great blog on performing sentiment analysis via the MS Cognitive Services and via a dictionary. This blog is slightly different, in that I will look at how we can leverage the 'code friendly' aspect of the Alteryx platform, through the R tool, to perform sentiment analysis.

A couple of my colleagues, Nils Macher and Soha Elghany, looked at and developed this concept whilst working on a client engagement. I have since 'ripped it off' and created an Analytical Application, hosted in the Alteryx Public Gallery, that allows users to score their own files for sentiment (note: the application is pending approval as it uses the 'R' tool to perform the analysis, as mentioned several times already :)).


This post by Gwilym Lockwood (another colleague), also gives a great example of applying sentiment analysis, in this case to the messages he shares in his relationship with his partner!

So tell me more about the syuzhet package…

In order to perform our sentiment analysis in Alteryx, using the R tool, we need a library to use; in this case, we will be using the 'syuzhet' library.

This library is not installed by default with Alteryx’s R installer, so point 1 is, install the package, which you can do by following the guidelines in this post.

The syuzhet library allows for an array of different methodologies to perform sentiment analysis, which make use of different ‘dictionaries’ to return the sentiment score.

This resource gives a lot more detail than I can ever give, with much greater wisdom on the subject area too!

So how can I leverage this package in Alteryx?

Well, it's simple(ish) really; first, we must connect our data table to the R tool; we must then read our data into R by using the 'read.Alteryx' function.

At the beginning of our code, we should also load the syuzhet library…

library(syuzhet)

##load the syuzhet package

tab <- read.Alteryx("#1", mode="data.frame")

##read our single column table into Alteryx as a dataframe

Now, in order to use this package, the field we wish to score must be a character vector.

tab$text <- as.character(tab$text)

##convert our text field to be a character vector

The next step is to score our text column for sentiment, which can be done by using the get_sentiment() function; in this example, I have scored sentiment using each of the different methodologies available in this library.

tab$afinn = get_sentiment(tab$text, method="afinn")
tab$bing = get_sentiment(tab$text, method="bing")
tab$nrc = get_sentiment(tab$text, method="nrc")
tab$syuzhet = get_sentiment(tab$text, method="syuzhet")

##score our text field for sentiment using the 4 different methodologies


Finally, we can output our dataframe, titled tab, to the ‘1’ output anchor…

write.Alteryx(tab,1)

##write our dataframe out of the R tool so it can be transformed further using standard Alteryx tools

This initial piece of code will provide us with an output along the lines of that shown in the image below

You'll see how each of the different methodologies has a different scale and different results, based on its dictionary's interpretation of the sentence.

I would advise that you select the methodology in advance of writing your code, as otherwise you may pick the methodology that best matches your expectations; a level of selection bias, if you will.

With the nrc methodology it is possible to supplement your text streams further, and view exactly what emotions were identified in the text string; we can then write this out into our workflow as a separate output.

emotions <- get_nrc_sentiment(tab$text)

##get the individual emotion scores for each text string for the nrc methodology


write.Alteryx(emotions, 2)

##write out the emotion scores to the 2nd output node

Once we have generated this 2nd table, we can then merge the two data streams outside of the R tool using the standard join tool (if you are proficient in R, of course you could do this inside, but I prefer to do any tasks that are possible with the standard Alteryx tools, with the standard Alteryx tools).

The output of this additional emotions table is shown below…

You can see how the second sentence contains only negative emotions, such as anger, disgust, fear and sadness; this allows you to take your analysis a step deeper and identify possible micro-trends in people's interactions with your business.

And that is how you can use R and Alteryx to perform sentiment analysis on your data; simple right!

If you want the workflow shown in this example, then just download the app, and start taking it apart and embedding in your own workflows and macros!

Ben

The post Leveraging R to perform Sentiment Analysis in Alteryx appeared first on The Information Lab.


Before I write this post, I must credit fellow Alteryx Ace Mark Barone, who provided a series of great responses in a post on the Community which helped guide me on this subject area; also Craig Bloodworth, my colleague and CTO of The Information Lab, who let me know this was actually possible!

Also, I say hidden, that’s not strictly true, it’s just not well known!

So what do I mean when I say ‘username variable’?

When a user runs an analytical app on Alteryx Server, you may want to use their information in order to manipulate how the application generates its output.

You could create a manual text box where they enter their name but, of course, they may enter their details incorrectly, or they may just pretend they are someone else!

The ‘username variable’ allows this process to happen automatically, by grabbing the ID of the individual who is running the app.

So give me an example of how this could work in the real world…

Sure; let’s take the following usecase.

Please note the below is a fictional usecase

In our business we create and share applications with our different clients. In order to create a secure barrier, we host our clients' data in different databases; our server is also structured so that each client has access to their own Private Studio, ensuring any bespoke content is made available only to the correct client.

As part of our work, we ask our clients to submit their own information, which is then transferred into their respective database.

We were approached to convert this submission form into an Alteryx application.

We could approach this in two ways;

1. Create an analytic app for each client, with an output data tool in each app pointing at the appropriate database

2. Use a single analytic app, where we capture the detail of the individual running the application and thus detour the data into the appropriate database

Of course, we are going to look at the second option, primarily because this would mean we can maintain our analytic app in a single location, which will reduce strain on our team.

So how can we do this?

Well, let’s start by building our app as if we would require our user to give their own details into the app, via a text box; in this simple example, I require my users to submit ratings for a given day, which I have set up using the ‘Date’ and ‘Numeric Up Down’ interface tools.

Given that I want to write the data into a database for the client (a company), rather than for the unique user, I will need to perform a lookup against a table with user names and their client name. To do this I will perform a lookup against the MongoDB which contains this information (each user is assigned to a private studio, which is the company name, which itself is the suffix of the database into which their data needs to be entered).

These streams can then be merged together and the client data written to the appropriate database.

Now, as I mentioned above, at present I have just created a 'text box' to allow our user to input their email address, which can then be matched to the MongoDB data to get the appropriate private studio and client detail. Let's now discuss how we can 'hide' this from our user so that their personal ID is automatically tracked by the application. To do this, we need to complete the following steps…

1. On our text box input, remove the question name that we have given, so it’s blank, this can be done in the configuration pane

2. Also in the configuration pane, check the option ‘Hide control (for API development)’; this means the user won’t be shown this question

3. On the ‘annotations’ tab in the configuration window for the text box interface tool, change the ‘Name’ attribute to ‘__cloud:UserID’

This step truly is our ‘secret sauce’, it’s how this trick works.

This variable returns the unique identifier, which matches the ID field given in the 'users' table in the MongoDB underneath our Alteryx Server.

We can then use this as our identifier to bring the private studio detail through and write our data into the appropriate database.

So let's see this in action

The below video demonstrates the behaviour in a similar usecase to that highlighted above (form entry and result into specific table).

If you have any questions, make use of the Alteryx Community, and I’m sure there will be many of us to reach out and help!

Ben

The post The hidden ‘username variable’ available when running applications in Alteryx Server appeared first on The Information Lab.

The Information Lab by Peter Gamble-beresford - 2w ago

One of the great things about Alteryx is that you can quickly generate ideas or sketches of logic to test a theory or prove a concept on the fly. Recently, I needed to do this when helping someone with one of their Alteryx workflows, and they were impressed by a trick I used to quickly generate some fake data to test that concept, so I thought it worth sharing.

Why might you need to do this? I can think of a few reasons this might be useful:

  1. You may not have access to a final production data set and might just want to get a draft concept working before switching over to your master data.
  2. You may be training other people in Alteryx and you don't have quite the necessary sample data to hand to demonstrate something with.
  3. You may need to create scaffold data within a workflow to supplement your actual data in order to solve a data modelling problem.
  4. You may want to share a piece of work but the real data is just too sensitive, so a fake placeholder dataset could help.

The concepts detailed below will be useful in many scenarios, not just generating fake data.

So what do we need to do?

I had just a little reshaping challenge to do, so I knew what the key measures were, with just one date dimension. You can whack these into the handy Text Input tool, then use an Append Fields tool to perform a cartesian join and get all combinations of your dimension values and measure names. (If you have more dimensions, you'd need another Text Input and Append Fields tool for each.) At this point I've kept the measures in one column so I can generate values for them all very quickly.

Now to generate the numbers: this is simply the use of a Formula tool and the Random Integer function. A quick cross tab of the measure names then creates a useful normalised data table that I can work with.

You may wonder why I used the Random Integer function rather than the Random function. This is because the Random function gives you a random number between 0 and 1, so all numbers are decimals, whereas with the Random Integer function, besides all values being integers, you can specify a maximum value, which gives you a little extra control over your values.

Upping the Complexity

This got me thinking: you could make this much larger and generate some really useful data sets if you needed to. Although my data was easy enough to mock up by tapping in a few cells of data, what if you wanted to dynamically generate data based on a few rules or limitations? After a quick search on the Alteryx Community I found a question with a really helpful answer by James Dunkerly, which is certainly worth breaking down and sharing.

Generating Measure Values Within a Given Range

Using the Random functions listed above, we can easily force our measures to be within a certain range; we need only to use them appropriately in the formula tool as such:

RandomInt([MaxValue] - [MinValue]) + [MinValue]

And if you want it to be a decimal, why not throw on +Rand() to the above formula too.
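For comparison, here's a small Python sketch of the same range trick (the function name is mine, and I'm assuming RandomInt(n) returns an integer between 0 and n inclusive):

```python
import random

def random_in_range(min_value, max_value, decimal=False):
    # Mirrors RandomInt(max - min) + min: an integer in [min_value, max_value]
    value = random.randint(0, max_value - min_value) + min_value
    if decimal:
        # Mirrors tacking + Rand() onto the formula: add a fractional part
        value += random.random()
    return value

print(random_in_range(10, 20))
```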

Generating Dates Within a Given Range

The generate rows tool is great for mocking up data. Take note of the grey input anchor: this means that an input with incoming data is not required. We can use incoming data in the logic to generate additional rows, or we can start with nothing and generate rows as we desire. In this example we choose a start date in the initialisation expression (i.e. start with this value).

I like to think of the generate rows tool in 3 steps (in a slightly different order to how the tool is presented, but each to their own…)

  1. Initialisation Expression – Start with this value (or expression to create the first value/row)
  2. Loop Expression – Perform this function on the initial value/row to create a new row (and continue…)
  3. Condition Expression – Do the same again until such time that this expression is false, then stop generating new rows

The above example will then start with the date ‘2010-01-10’ as the first row, then add a new row with the date one month later, until the final row equals the month of today’s date (truncated to the first of the month).
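The three steps above can be sketched outside Alteryx too. Here's a hypothetical Python version of the Generate Rows pattern, using a fixed end date rather than today's date so the result is predictable:

```python
from datetime import date

def generate_rows(initial, loop, condition):
    # Mimics Alteryx's Generate Rows tool: start with the initialisation
    # value, then keep applying the loop expression while the condition holds.
    rows = []
    value = initial
    while condition(value):
        rows.append(value)
        value = loop(value)
    return rows

def add_month(d):
    # Loop expression: roll forward one month (day pinned to the 1st)
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    return date(year, month, 1)

months = generate_rows(date(2010, 1, 1), add_month,
                       lambda d: d <= date(2010, 6, 1))
print(len(months))  # → 6 rows: Jan to Jun 2010
```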

Bringing it all together

Hopefully you’ve nailed the basics from above now and can whip up basic datasets to your heart’s content. However, we could get a bit meta here and combine the techniques above for a more realistic set of data. By this I mean instead of creating uniform complete sets of dimensions by using cartesian joins, we could randomise the generation of the dimension sets.

Randomising Record Count per Member of a Dimension

Take for example, customers and transactions. Let’s say we want a dataset of one row per transaction per customer. In genuine data, not all customers will have had the same amount of transactions. So to generate data that is helpfully realistic, we can use the random function to generate rows that represent transactions.

First, we generate a list of Customer IDs up to a defined maximum, then, for each we generate a random integer, which is the number of transactions, thus new rows we require. We then use a second generate rows tool to generate these additional rows.

Randomising Dates

Again, a combination of these tools can help us randomise date data too.

First, generate the desired date range using the steps outlined above, then give each a record ID, and find the maximum. Append this maximum value to the data that needs random dates applied to it, and then use a formula tool to generate a random integer within the range of the number of dates available.

Finally, join based on this randomly assigned integer, such that all data points receive a random date.
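Here's a minimal Python sketch of that random-date assignment, assuming a one-month date range and a handful of hypothetical transaction IDs:

```python
import random
from datetime import date, timedelta

# Build the date range and give each date a 1-based record ID, as above
dates = [date(2019, 1, 1) + timedelta(days=i) for i in range(31)]
date_lookup = {i + 1: d for i, d in enumerate(dates)}
max_id = len(dates)  # the maximum record ID we append to the data

# For each data point, generate a random integer within the range of
# available dates, then "join" on it to assign a random date
transactions = ["txn-a", "txn-b", "txn-c"]
assigned = {t: date_lookup[random.randint(1, max_id)] for t in transactions}
print(assigned)
```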

Combining all these concepts can give you a helpful fake dataset pretty quickly, be it for proving a concept or building a dashboard before your final data is ready. I hope you find this useful for your own use case.

The workflow is available here to have a play around with for yourself.

Feel free to reach out on twitter or comment below with any questions!

The post Quickly Generate Dummy Data in Alteryx appeared first on The Information Lab.


The Tableau Server REST API is a hugely powerful resource that allows administrators to develop their own applications which automatically perform tasks on a Tableau Server.

In this blog, I'll detail how (using a 'cheat' method) we can quickly get an authentication token to perform such actions, without reinventing the wheel.

But, before we start, below are just a few of many usecases that we have seen for interacting with the Tableau Server REST API using Alteryx.

1. Migrating content from one site to another.

2. Batch publishing workbooks

and perhaps the most common…

3. To publish a datasource

So why do we need an authentication token?

For security purposes, the Tableau Server REST API requires you to authenticate with your login credentials. This process means that you will only get access to the resources appropriate to your Tableau Server account.

This initial 'sign in' process returns an authentication token which is valid for 240 minutes (though this can be changed using the Tableau Services Manager (TSM)); this token is also only valid for a specific site.

Essentially it offers a short-cut so you don’t have to perform this ‘sign in’ step for every request you wish to make.
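For context, the underlying 'sign in' call looks roughly like this; the server URL, API version and credentials below are placeholders, and this sketch only builds the request rather than sending it:

```python
# Sketch of the Tableau Server REST API sign-in request.
# SERVER and the credentials are hypothetical placeholders.
SERVER = "https://my-tableau-server"
API_VERSION = "3.4"

def build_signin_request(username, password, site_content_url=""):
    # The sign-in endpoint expects an XML tsRequest body with the
    # credentials and the site's contentUrl ("" for the default site).
    url = f"{SERVER}/api/{API_VERSION}/auth/signin"
    body = (
        "<tsRequest>"
        f'<credentials name="{username}" password="{password}">'
        f'<site contentUrl="{site_content_url}"/>'
        "</credentials>"
        "</tsRequest>"
    )
    return url, body

url, body = build_signin_request("ben", "secret")
# POSTing this body to `url` returns a token, which subsequent calls
# then pass in the X-Tableau-Auth request header.
print(url)
```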

So how can I generate an authentication token in Alteryx?

I'm glad you asked, and I can't take much credit for this; it's all thanks to the Alteryx development team.

The first version of the Publish to Tableau Server tool gave users the ability to request an authentication token, rather than publish a datasource to Tableau Server; however, in later versions of the tool this functionality has been removed. Alteryx have their reasons for this, but I won't get into what those are; let's just trust them.

Now, not everyone knows that the first version of the tool has different functionality to the active version of the Publish to Tableau Server tool, so they won't know this exists!

Downloading and Installing the Tool

So, lets start by downloading the old version of the Publish to Tableau Server Tool, which can be done via the Alteryx Public Gallery.

There are actually three versions available to download; we need version 1, and we need a version which is compatible with your Tableau Server, which is likely to be 'V1.09.2'. From my exploration, it seems the latest version of the tool is now installed in the product by default (very much needed I think, so big up Alteryx on that one).

In order to get the download file, we need to run the analytic app, which will present the user with a download hyperlink; alternatively, if you like shortcuts, the download link is here.

This process will download a '.yxi' file, or Alteryx Installer file; this is a special kind of executable file which, when triggered, will unpackage the macro and automatically place it into the directory…

C:\Users\USERNAME\AppData\Roaming\Alteryx\Tools

This process makes the tool available via the toolbar.

Using the Tool

The tool is available in the ‘Developer’ tab, but if you love to use the search like me, just searching the terms ‘publish’ or ‘tableau’ or of course ‘Publish to Tableau Server’, will surface the tool.

Once you bring it onto the canvas, you will get one of two interfaces, the interface shown in the first image, represents the latest version of the tool…

Latest version

The interface shown in the second image (below), represents the first version of the tool.

Version 1

Now, even though we've definitely downloaded the old version of the tool, Alteryx has been clever: by default (providing you have it, which you will in version 2019.1 and onwards), it will present the user with the latest version of the tool.

However, we can downgrade the tool version, by right clicking on the tool and hitting the ‘change tool version’ option.

This process will downgrade the tool on your canvas, and the configuration pane will revert to the first version of the tool.

So, now we have the right tool on the canvas, lets get our authentication token.

In order to use the Publish to Tableau Server tool, we must have an input; I just use a text input with a single column and row, as it really doesn't matter.

On the tool itself, you should configure the connection settings as appropriate.

On the second tab (and this is where the magic is), there is the option ‘Request Authentication Token’, which, unsurprisingly, will generate an authentication token.

The image below highlights the option that we need to select in red; the image also highlights, in green, the authentication token, given in the field ‘X-Tableau-Auth’.

Now, whilst that output header name may seem a tad random, it's actually extremely useful: as you will discover when making any subsequent calls, 'X-Tableau-Auth' is the header name for the authentication token, so when we use the Download tool to make subsequent calls, we can simply pass this field as a header option.

I've written this blog because I feel that those new to the tool will not realise this behaviour exists; I've experienced first hand people reinventing the wheel, simply because they didn't know the tool existed at all.

Ben

The post How to get an authentication token to use the Tableau Server REST API in Alteryx appeared first on The Information Lab.


In Alteryx, certain tools in the platform are deprecated (basically killed), with new releases.

Now Alteryx aren’t stupid, they aren’t killing just any tools, they will be carefully monitoring usage and identifying tools in the stack which are both under-used, and difficult for them to maintain.

An example of this would be the ‘Twitter Search’ tool, which enabled users to get tweets which matched a specific search criteria (hence the name). This tool was useful, but it was useful to probably around 0.001% of Alteryx users, and of those who used it, they probably only used it in around 0.001% of their workflows.

Not only was it not used by a significant number of users, but Twitter could, and did, make changes to the structure of their API, which resulted in further work for the Alteryx development team to ensure the tool continued to work.

So how do I get them back?

Again, Alteryx aren’t stupid, they appreciate some of their users still may want to make use of the tools that they have seemingly removed from the product, and they give an option to those users to do so.

In order for them to appear in your tool palette again, users simply need to ‘Right Click’ on the toolbar and hit the option ‘Show Deprecated Tools’.

It’s really that easy to do (Y)

I should stress though, any issues with deprecated tools will not be addressed by Alteryx, and these should be seen as no longer supported.

With the development of the new tool interfaces, Alteryx have also given users the option, for certain tools, to downgrade to the old-style interface. For example, with the Score tool, which forms part of the predictive tool suite, users can right click and hit the option 'Choose Tool Version', which will revert the tool to its old interface and logic.

That’s all for this post, and I hope it proves useful to those 0.001% of users.

Ben

The post How to show deprecated tools in Alteryx appeared first on The Information Lab.


In my latest client engagement I was working with a client to help them understand their performance at the lowest level, against a set of higher-level benchmarks.

In Sample Superstore terms, my Sub-Category managers wanted to understand how they were performing against others in their category, and against the group as a whole.

They also wanted to benchmark their performance against the previous year too.

In this post, I will highlight how a few simple LODs can help us perform this benchmarking process.

The End Game

The visual below highlights the ‘end game’ for this task.

The primary visual details, for each sub-category, performance over the latest 12 months.

The colour indicates the difference to the selected benchmark (as determined using the control panel on the left side).

Beside each table, there is an indicator (✓ or ✗) which signifies consistently good, or consistently bad, performance over this 12-month period.

And the control panel allows my Category and Sub-Category managers to view only their product sets, without affecting the benchmarking results.

Click here to view the interactive viz on Tableau Public!

Visual Build

Before we perform our benchmark calculations, let’s get our visualisation set up.

I want to put the Month of my date field [Order Date], onto columns, and I want to put my sub-categories on rows.

Now, I only want to see the sales values for the latest 12 months, not the values for every month over all time. Rather than filtering to the latest year, I’ll write a calculation to do this (I’ll actually write a few).

1. [Date Aggregation] – DATETRUNC('month',[Order Date])

//This calculation returns all of my dates truncated to a month level; for example, the date 2019-01-21 becomes 2019-01-01

2. [Latest Date] – { FIXED : MAX([Date Aggregation])}

//This calculation returns the latest month/year period in my dataset

3. [Last 12 Months] – DATEDIFF('month',[Latest Date],[Date Aggregation]) >= -11

//This calculation returns a boolean flag for each row, determining whether it is in the latest 12 months (TRUE) or not (FALSE)

4. [Sub-Category Sales] – SUM(IF [Last 12 Months] THEN [Sales] END)

//This calculation returns the sales values for the latest 12 months only
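The logic of these four calculations can be sketched in plain Python. This is just an illustration of the date maths, not Tableau itself; the dates and sales figures below are invented for the example.

```python
from datetime import date

# Hypothetical row-level data, one row per order (as in Sample Superstore).
rows = [
    {"order_date": date(2018, 11, 3), "sales": 100.0},
    {"order_date": date(2019, 1, 21), "sales": 250.0},
    {"order_date": date(2019, 12, 5), "sales": 400.0},
]

def date_aggregation(d):
    """Calc 1: DATETRUNC('month', [Order Date]) -- truncate to the first of the month."""
    return date(d.year, d.month, 1)

# Calc 2: { FIXED : MAX([Date Aggregation]) } -- one value for the whole dataset.
latest = max(date_aggregation(r["order_date"]) for r in rows)

def month_diff(start, end):
    """DATEDIFF('month', start, end): whole-month difference, negative when end < start."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def last_12_months(d):
    """Calc 3: TRUE when the row's month is within 11 months of the latest month."""
    return month_diff(latest, date_aggregation(d)) >= -11

# Calc 4: SUM(IF [Last 12 Months] THEN [Sales] END).
sub_category_sales = sum(r["sales"] for r in rows if last_12_months(r["order_date"]))
```

With this toy data the latest month is December 2019, so the November 2018 row falls outside the window and only 650.0 of sales remains.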

I can take my 4th calculation and bring this into the view on the text shelf.

You’ll see I have added a field titled ‘Month Order’ into the view. In this case it has no effect, because December is my latest month; but if November were my latest month, I’d want December to appear on the left side rather than the right. This calculated field, and its placement on the columns shelf, allows me to achieve this behaviour.

5. [Month Order] – MONTH([Date Aggregation])-MONTH([Latest Date])

//Return the difference, in months only, between the latest date and the date given on each row.

LODing

Now we’ve created our view, let’s get ‘knee deep in LODs’, as we create our benchmark calculations.

Let’s remember, for this example, I want to benchmark against:

a. The Company as a whole (i.e. the sales for this sub-category, vs. the average sales for all sub-categories).

b. The Category (i.e. the sales for this sub-category, vs. the average sales for the other sub-categories in my parent category)

c. YoY (i.e. the sales for this sub-category, vs. the sales for this sub-category, last year).

6. [Company Sales] – MIN({ FIXED [Date Aggregation]:AVG({ FIXED [Sub-Category],[Date Aggregation]:SUM(IF [Last 12 Months] THEN [Sales] END)})})

//This calculation requires a nested LOD. First, we create a FIXED statement to return the sum of sales, over the last 12 months, for each sub-category and month. From this (fictional) table, we then perform a secondary FIXED calculation, which returns the average of these values at purely the month level.


7. [Category Sales] – MIN({ FIXED [Category],[Date Aggregation]:AVG({ FIXED [Sub-Category],[Date Aggregation]:SUM(IF [Last 12 Months] THEN [Sales] END)})})

//This calculation again requires a nested LOD. First, we create a FIXED statement to return the sum of sales, over the last 12 months, for each sub-category and month. From this (fictional) table, we then perform a secondary FIXED calculation, which returns the average of these values at the month and category level.
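The two nested LODs can be mimicked as a two-stage aggregation: sum to sub-category/month first, then average those sums at the coarser level. Again, the data below is hypothetical and just illustrates the shape of the calculation.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical monthly rows, already filtered to the latest 12 months:
# (category, sub_category, month, sales)
rows = [
    ("Furniture", "Chairs", "2019-11", 100.0),
    ("Furniture", "Tables", "2019-11", 300.0),
    ("Technology", "Phones", "2019-11", 500.0),
]

# Inner FIXED: sum of sales per sub-category and month.
inner = defaultdict(float)
for cat, sub, month, sales in rows:
    inner[(cat, sub, month)] += sales

# Calc 6 outer FIXED: average of those sums per month (the company benchmark).
by_month = defaultdict(list)
for (cat, sub, month), total in inner.items():
    by_month[month].append(total)
company_sales = {m: mean(v) for m, v in by_month.items()}

# Calc 7 outer FIXED: average of those sums per category and month (the category benchmark).
by_cat_month = defaultdict(list)
for (cat, sub, month), total in inner.items():
    by_cat_month[(cat, month)].append(total)
category_sales = {k: mean(v) for k, v in by_cat_month.items()}
```

The key point the nesting captures: the outer average runs over the inner per-sub-category sums, not over the raw order rows.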

Now for the YoY benchmark, I have created two calculations.

8. [Previous 12 Months] – DATEDIFF('month',[Latest Date],[Date Aggregation]) >= -23 AND DATEDIFF('month',[Latest Date],[Date Aggregation]) < -11

//This calculation returns TRUE if the difference between the latest date and the given line is between -12 and -23 months inclusive, i.e. the 12 months preceding our current window

9. [Previous Year Sub-Category Sales] – SUM(IF [Previous 12 Months] THEN [Sales] END)

//This calculation returns the sum of sales, providing the line date is within the Previous 12 Months time-frame.
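Calc 8's window test is just a banded month difference; a minimal sketch, assuming the same hypothetical latest month as before:

```python
from datetime import date

latest = date(2019, 12, 1)  # hypothetical [Latest Date] (calc 2)

def month_diff(start, end):
    """DATEDIFF('month', start, end): whole-month difference."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def previous_12_months(month_start):
    """Calc 8: TRUE when the month falls 12 to 23 months before the latest month."""
    diff = month_diff(latest, month_start)
    return -23 <= diff < -11
```

December 2018 (12 months back) is in the window; January 2019 (11 months back) belongs to the current window instead, and December 2017 (24 months back) is too old.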

Parameter Control

As noted in the ‘End Game’ section, I would like my end users to be able to control which benchmark they compare against. To do this I will create a list parameter containing three values: ‘vs. Company’, ‘vs. Category’ and ‘YoY’.

I will then build a calculation which determines which has been selected by the user.

10. [Select Benchmark Control] – CASE [Select Benchmark] WHEN "vs. Company" THEN [Company Sales] WHEN "vs. Category" THEN [Category Sales] WHEN "YoY" THEN [Previous Year Sub-Category Sales] END

//This calculation uses a CASE statement to check the parameter selection and return the corresponding benchmark field.

The next calculation will now create the difference between the selected benchmark value, and the actual sales value for that sub-category and month.

11. [Difference to Selected] – [Sub-Category Sales]-[Select Benchmark Control]

//Take the selected benchmark value away from the sub-category sales for the current 12 months. A positive value would indicate they are performing better than the benchmark; a negative value would indicate they are performing below the benchmark.
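Calcs 10 and 11 together are a lookup followed by a subtraction; in plain Python the CASE statement is effectively a dictionary lookup. The figures here are invented values for a single sub-category/month cell.

```python
# Hypothetical benchmark values for one sub-category and month.
benchmarks = {
    "vs. Company": 300.0,   # calc 6
    "vs. Category": 200.0,  # calc 7
    "YoY": 250.0,           # calc 9
}
sub_category_sales = 280.0  # calc 4 for this cell

def difference_to_selected(select_benchmark):
    """Calcs 10 and 11: pick the chosen benchmark, subtract it from actual sales."""
    return sub_category_sales - benchmarks[select_benchmark]
```

A positive result means performance above the chosen benchmark; here the cell beats its category (+80) but trails the company average (-20).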

Visual Prompt

The table could be quite overwhelming to the audience. As a designer, I wanted my users to be able to quickly identify those sub-categories showing either consistently good, or consistently bad, performance, which I will define as 9 or more of the 12 months being above or below the benchmark figure.

The calculation to achieve this is as follows…

12. [Flag Calculation] – { FIXED [Sub-Category]:SUM(IF { FIXED [Sub-Category],[Date Aggregation]:[Difference to Selected]} > 0 THEN 1 ELSEIF { FIXED [Sub-Category],[Date Aggregation]:[Difference to Selected]} < 0 THEN -1 ELSE 0 END)}

//For each sub-category and month, if the difference to the selected benchmark is positive, return the value 1; if the difference is negative, return the value -1. Then, at the sub-category level, sum these figures.

13. [Flag Icon] – IF [Flag Calculation] >= 6 THEN “✓” ELSEIF [Flag Calculation] <= -6 THEN “✗” ELSE “” END

//If the net score is greater than or equal to 6, return a positive flag, because this indicates 9 or more months of positive performance; if the value is less than or equal to -6, this indicates 9 or more months of negative performance. Finally, if the value is between -6 and 6, that indicates mixed performance, which in this case we have decided is not worth flagging.

Polishing Off

The image above highlights how I have brought the ‘Flag Icon’ field onto rows to make it visible to our end users.

I have also created a customised tooltip to give detail regarding the benchmarks the user hasn’t selected.

Filters allow my users to present just the category or sub-category that is relevant to them, without affecting the benchmark scores (remembering we have used FIXED calculations, which are calculated before our filters according to the Tableau order of operations).

Finally, I have created a dynamic title, which helps the user understand what they are looking at, should they not understand the control panel on the left side of the view.

I hope this post is useful to some; it was certainly a fun project for myself!

Ben

The post LODing and Benchmarking appeared first on The Information Lab.


A question that came up internally recently was whether Tableau’s union function matches by position or by field name. The ability to union data together was introduced in Tableau Desktop 9.3.

A union is where you would add additional rows by effectively putting two data tables on top of one another. Tableau’s help guide has a nice way of visualising this.

The best way to find out how Tableau was doing this by default was to test it using our good friend and ally, Excel.

Col 1 | Col 2 | Col 3
------|-------|------
Hello | From  | The

The first sheet I created, ‘Col’, looked like the above.

1        | 2      | 3
---------|--------|------
Helloooo | Frommm | Theee

The second sheet, imaginatively named ‘Sheet2’, was like the above. Note the different header names, to test whether it matches by name or by position.

As you can see above, the union matches by field name, so our two sheets did not match.

However, should you wish to match them by position, not field name, if you right click > ‘Generate Field Names Automatically’, Tableau creates dummy headers (F1, F2, F3 etc) and unions them by position!
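The two union behaviours can be sketched in plain Python, treating each sheet as a mapping of column name to values. This is only a model of what Tableau does, using the two toy sheets from the Excel test above.

```python
# Two hypothetical sheets with different headers, as in the Excel test.
sheet1 = {"Col 1": ["Hello"], "Col 2": ["From"], "Col 3": ["The"]}
sheet2 = {"1": ["Helloooo"], "2": ["Frommm"], "3": ["Theee"]}

def union_by_name(a, b):
    """Tableau's default: match columns by field name; unmatched columns get NULLs."""
    cols = list(a) + [c for c in b if c not in a]
    n_a = len(next(iter(a.values())))  # row count of each sheet
    n_b = len(next(iter(b.values())))
    return {c: a.get(c, [None] * n_a) + b.get(c, [None] * n_b) for c in cols}

def union_by_position(a, b):
    """'Generate Field Names Automatically': rename columns to F1, F2, ... then union."""
    renamed_a = {f"F{i + 1}": v for i, v in enumerate(a.values())}
    renamed_b = {f"F{i + 1}": v for i, v in enumerate(b.values())}
    return union_by_name(renamed_a, renamed_b)
```

By name, the result has six half-empty columns because no headers match; by position, the dummy F1/F2/F3 names line the sheets up into three full columns.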

Thanks for reading!

The post Digging into Unions in Tableau appeared first on The Information Lab.
