
Today I had an issue where I could not access my Essbase Cloud Service after I restarted the OAC instance. When I went to my URL, I got an error that my site could not be found. If you are having this issue, here are the steps I took to get my instance back up and going:

  1. SSH into your instance. If you need details on how to do this, look here.
  2. Once in, change your user to oracle by entering the following:
    sudo su - oracle
  3. Next, use vi to modify the capiurl file located in this directory: /u01/data/domains/esscs/config/fmwconfig/essconfig/essbase/
  4. Inside the file, you will see the URL for your Essbase instance.
  5. Change the URL to use https and remove the :9000 port.
  6. Press “Esc” and then type :wq to save and exit vi.
  7. Restart your instance and you should be back up and running!
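
Putting it all together, the whole fix looks roughly like this (the key path and IP address are placeholders, and the exact URL inside capiurl is specific to your instance):

    ssh -i /path/to/privateKey opc@your.oac.ip.address
    sudo su - oracle
    vi /u01/data/domains/esscs/config/fmwconfig/essconfig/essbase/capiurl
    # change http://yourhost:9000/... to https://yourhost/... , then Esc and :wq to save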

A quick one here…

Here is the process I went through to find where Essbase Cloud Service places exported data by default:

Following the instructions from earlier in the week on how to SSH into your cloud instance and run MaxL, run the MaxL command to export data from your cube.
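
For reference, a level0 data export in MaxL looks something like the following (the application and database names are the ones from my test cube mentioned below; the export file name is just an example):

    export database GSC_Test9.GSCTest_9 level0 data to data_file 'GSCTest_9_export.txt';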

Because we will continue working at the command line, we need to log out of Essbase and exit MaxL to get back to the normal shell prompt.

I did not see my data file where I thought it would be – in the /u01/data/ folder – so I did a find on the box to locate it. I found it in the /u01/latency/app/GSC_Test9/GSCTest_9/ folder. When I tried to access the folder as the oracle user, I was denied permission (not shown), so I changed back to the opc user.

From here I navigated to where the data file was located but was denied permission again. This time I used

sudo su - oracle

and successfully accessed the file location (and verified the file existed).

  1. I tried to download the file using scp but got rejected.
  2. I moved my file to /bi, as my user had access to that folder (I saw this via FileZilla’s permissions view).
  3. When I tried to download the file via scp from the new location, I still had issues.
  4. I found that the file’s permissions did not allow it to be downloaded, so I changed them to read/write (with great permissions comes great responsibility!).
  5. Finally, I gave up on scp and used FileZilla to copy the file to my local machine.
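
For reference, the command-line attempts in steps 1-4 would look roughly like the following (the key path and IP address are placeholders, and the export file name matches the example earlier; as noted, scp never cooperated for me, which is why I ended up in FileZilla):

    # on the OAC box, as the oracle user: loosen the file permissions and move it to /bi
    chmod 644 /u01/latency/app/GSC_Test9/GSCTest_9/GSCTest_9_export.txt
    mv /u01/latency/app/GSC_Test9/GSCTest_9/GSCTest_9_export.txt /bi/
    # then, from your local machine
    scp -i /path/to/privateKey opc@your.oac.ip.address:/bi/GSCTest_9_export.txt .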


Recently we had an issue on our OAC box where Oracle Support asked us to perform a series of steps to run scripts and obtain logs. This should not have taken days (really) to gather, but because the documentation is both incomplete and scattered, I thought I would document what actually worked so you can do this in a matter of minutes versus days.

There are really 3 parts to this blog post:

  1. How to SSH to OAC
  2. How to run MaxL via command line in the cloud
  3. Obtaining Essbase Cloud Service diagnostic logs for Oracle Support

They all build on each other, so it makes sense to lump them together, but I’ll still break them down into 3 groups so that if you need just one of the parts, you can get to that portion quickly.

How to SSH to OAC

  1. The first thing you need in order to SSH to OAC is a copy of the private key on your local machine. When you created your OAC instance, you should have been given the opportunity to download both the private and public keys. Record the location of the private key.
  2. I am connecting via a Mac, so I have the built-in luxury of SSH capabilities. If you are running Windows, you will need to download PuTTY (or something similar) so you can access the boxes.
  3. Once you have all the initial steps in place, enter the following on the command line (for Linux/Mac users):
    ssh -i "file location for private key" opc@IPAddress
    Example (customer and IP address identifiers removed):
    ssh -i "/Users/scz/Desktop/Clients/ClientA/sshbundle/privateKey" opc@129.1.1.1

    The private key acts as your password for user opc.

  4. The first time you do this, you will get an error stating that your permissions are too open. This is an easy fix that needs to be applied to your machine – enter the following 2 commands, customized to your private key location:
    sudo chmod 600 /Users/scz/Desktop/Clients/ClientA/sshkeybundle/privateKey
    sudo chmod 600 /Users/scz/Desktop/Clients/ClientA/sshkeybundle/privateKey.pub
  5. Once you have done this, you should be able to use the ssh command again with success!

How to run MaxL via command line in the cloud

  1. Once you have SSH’ed to OAC, you can access a multitude of options, files, and commands. We are concerned with MaxL. The first thing you need to do is switch to the oracle user by entering the following command:
    sudo su - oracle
  2. Next, run the startMAXL.sh script by entering the following command (note: the leading “./” runs the script from a path relative to your current directory):
    ./domains/esscs/esstools/bin/startMAXL.sh
  3. From here, you can run normal MaxL commands.
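
For example, a couple of harmless statements to try from the MaxL prompt (you may be asked to log in first, depending on how the shell starts):

    display application all;
    display database all;
    exit;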

Obtaining Essbase Cloud Service diagnostic logs for Oracle Support

  1. Once you have SSH’ed to OAC, switch to the oracle user using the following command:
    sudo su - oracle
  2. This is where my experience differed from Oracle documentation. I had to migrate to the directory where the script was stored:
    cd /bi/app/public/bin
  3. Once here, I could run the Python script:
    python collect_diagnostic_logs.py /tmp/EssbaseLogs.zip
    Note that adding “.zip” to the end of EssbaseLogs ended up being redundant (again, the documentation stated differently), so my actual file name ended up being EssbaseLogs.zip.zip.

    The process will complete.

  4. Supposedly you can use the scp command to download your zip file, but I could not do so (and Support isn’t sure why), so I decided to use FileZilla to download my log file. In FileZilla (or PuTTY), you will need to do something special for your connection… Create the following:

    1. First, create a saved site connection (via the Site Manager) rather than using Quickconnect.
    2. Enter your OAC IP address.
    3. Choose SFTP as the protocol.
    4. The port is 22.
    5. Choose key file as the logon type (the key file acts as your password).
    6. The user is opc.
    7. Browse to your private key for the key file.
    8. Click “Connect”.
  5. This will connect you to the box so you can navigate to the log file location and then copy the file to your local machine.
  6. Upload that file to Oracle Support and you are all set.
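
For completeness, the scp attempt mentioned in step 4 would look roughly like this, run from your local machine (key path and IP address are placeholders; the zip name is the double-“.zip” one produced above):

    scp -i /path/to/privateKey opc@your.oac.ip.address:/tmp/EssbaseLogs.zip.zip .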

When I saw that Data Sync 2.4 for OAC was released this morning, I decided to test the data load process to an EssCS cube. I hoped it would be a straightforward process, but – in all honesty – it took me 2 ½ hours to get it up and loading. Below are some of the steps and missteps I took while installing and using Data Sync 2.4…

First, I downloaded the file from https://t.co/o0eR3HII7x. That part is straightforward.

I extracted it to my location of choice (full disclosure…I ran into some issues so I changed my extract location a couple times until I figured out the issue. So, if you see different install folders, you will know why!).

Because my VM was new, I had never set the JAVA_HOME environment variable. Of note, OAC Data Sync must reference a JDK installation; a JRE is not going to work!

In the config.bat file, edit JAVA_HOME to point to the location you just set in your environment variables.
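
For illustration only (the JDK folder below is a made-up example; point it at your actual JDK install, not a JRE), the edited line in config.bat ends up looking something like this:

    set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161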

Here is where I lost a LOT of time. I could NOT get Data Sync to start! The batch file would close and no Java application would open. I put a pause at the end of the datasync.bat file to see what the errors were… What I got was:

Error: Could not find or load main class com.siebel.analytics.etl.client.view.EtlViewsInitializer Replication

Now, I am no Java expert (a novice’s novice, perhaps, at BEST), so I did a LOT of research. I saw that the Java class could not be found; that class is located on a path referenced by the DACCLASSPATH variable.

The config.bat file is where the DACCLASSPATH is set.

So I added the full file path:

In fact, I added it to every location in each batch script referencing config.bat, like the following:

Once I updated each batch file, I ran the datasync.bat script to create the Data Sync environment.

I missed the screenshot on this one (the step where you choose between creating a new environment or connecting to a previous one), but I chose to create a new one.

Next, you are asked to create a password for DS:

You will wait a few seconds for the files to be created before you get the all-clear.

Next, you will run datasyncClient.bat to start setting up the actual data synchronizations.

If you chose to save the password earlier, it will be entered for you. Otherwise, enter the password.

I wanted to create a new Project (data synchronization between an Excel file and EssCS) so I gave it a name:

The first thing I needed to do was create my connection to EssCS. Note that the URL is http://ipaddress:9000/

My connection tested successfully, so I’m good to move on.

On the Project tab under Source Data, I am choosing Data from Object(s) as it’s an Excel file. So I set the location of my Excel file.

I am testing with just one record so I set the parameters as such:

I created a new “Target” name…aptly “TestLoad”.


This is the second post in a series about the science behind Oracle’s Data Visualization. This post continues the dive into the Advanced Analytics tab. As a reminder, the AA tab is located here:

I’m going to continue using my Boston Marathon finish times as the sample data set. Recall that some people have won the BM more than once so to distinguish between each of their finish times, I have added a column called “Name Unique” to treat each finish separately.

The first thing I want to show is the set of options for Clusters in DV. In my Scatter (Category) diagram, I have a simple visualization of each finish time by name. I’m going to add Clusters to Color.

Original:

Clusters:

In the lower right-hand corner, we see the details behind our Clusters when we click the AA tab:

By default, we get 5 clusters using K-Means with each data point being evaluated individually. The other option we have for Algorithm is Hierarchical. We can choose as many groups as we would like (although 1 would not make sense because then we are back to our original visualization). We can also choose between Cell, Rows, Columns, or Rows & Columns for our Trellis Scope. Let’s start with the math behind K-Means versus Hierarchical.

K-Means

Before we address the math, I think you will be surprised to learn there is less defined math in Clustering than you might think. It’s more an art with some science thrown in the mix. And it’s truly a process versus a formula. The process is as follows:

  1. Choose the number of clusters you would like in your data.
  2. Make your best guess at the centroid (center) of each cluster. These do not have to be actual values in your data.
  3. Assign each data point to its nearest centroid, using the standard Euclidean distance.
  4. Recalculate each centroid as the mean of all the data points assigned to that centroid’s cluster.
  5. Repeat steps 3 and 4 until a stopping criterion is reached:
    1. No data points change clusters.
    2. The sum of the distances is minimized.
    3. A prearranged number of iterations is reached.
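
In formula terms (my notation, not anything DV exposes), steps 3 and 4 are iteratively minimizing the total within-cluster sum of squared distances:

J = Σₖ Σ_{x in Cₖ} ‖x − μₖ‖²

where μₖ is the mean of the data points currently assigned to cluster Cₖ, and each point belongs to the cluster whose centroid it is closest to.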

Not quite as scientific as you thought, huh? Part of the reason is that clustering is used as an exploratory data tool and has no general theoretical solution for the optimal number of clusters for a given data set. K-Means is an optimization algorithm, and the solution is only optimal relative to the initial configuration; if the initial configuration changes, there could be a better solution. Note that K-Means is susceptible to Outliers. I added Outliers to my visualization as a shape to show the options…similar to Clusters. Notice that the Outliers occur in the top and bottom cluster strata. We will visit Outliers later, but I wanted to show the relationship.

Hierarchical

This one is MUCH different from K-Means and involves far less formal math. In fact, it is – more or less – eyeballing close relatives in a set of individual data points. If we take a look at the diagram below, it kind of looks like an NCAA March Madness bracket. And, actually, that is essentially what the basketball brackets are – hierarchical clustering. Back to the diagram… So what is the process to go from the individual data points to the end result, one cluster?

  1. Each individual data point is its own cluster. This would be the far right of the diagram.
  2. Look at the data points and group together ones that are close.
  3. Keep merging similar pairs until you have one cluster.

See, there is no real function to use as a base here, unlike with K-Means. If you want k number of groups, then just remove the k-1 longest links.

(Thank you to the link for the visualization!)

Let’s change our visualization to be Hierarchical and keep K-Means at the top for reference.

We see that there are considerably fewer points in clusters 3-5. In the Hierarchical visualization there appears to be a hierarchy, going from bottom to top. So which one should you use? It depends on your goals and data. If you are looking for data that might be Outlier-ish, then K-Means may work better. If you are looking for stratifications in your data, then Hierarchical might work better. …But you know your data better than I do!

To finish off the options, you can choose how to cluster the data based on the cell value, rows, columns, or both. The default is Cell, and the data points are evaluated individually. With Rows, the values are evaluated on a row basis (i.e., Nationality). With Columns, the values are evaluated by the chosen Column(s) (i.e., Gender). If you choose Rows & Columns, then it is a function of both. Because our data set is not ideal for showing these options, it might help to see a visual of what I just described:

Going back to our Scatter visualization, here is Cell for base visualization:

If we change this to Rows, we don’t get a change.

However, if we change to Columns, we see the Men’s row change quite a bit.

When we choose Rows & Columns, the visualization doesn’t change since only Columns produced a change in clustering.

Hopefully you feel a bit more educated on clustering. Too bad it’s more an art than a science!


This is the start of a new series that I’m excited about diving into. Back in college, one of my degrees included a concentration in Quantitative Analysis. I have loved statistics since my first exposure to it in high school and found that I did pretty well at the subject. Statistics has always come naturally to me, but I found it hard to get a job in the area without at least a Master’s degree. I started getting excited about 2-3 years ago when statistics, namely advanced analytics, started becoming a staple in many IT and functional organizations. Data visualization is part of this staple, and it’s not just…data art. It’s very much a science that plays well to the artists inside us.

When I teach people Data Visualization, I find they try to jump into visualizations too fast, without understanding why or how they work. They want to see pictures quickly and have their data make sense on the first try. …I hate to break it to you, but it almost never works like that. While the incoming college hires are more data savvy than the ‘old guard’, there is still a fundamental education that needs to occur about the different types of data out there. While this post (and series, at least initially) will not be diving into the different data types, it is important to understand the difference between facts/measures, attributes, dates, and spatial data.

This series’ focus is on explaining the options you have available to you in Oracle Data Visualization. You have considerably more statistical power available to you than you might imagine!

The first stop on the analysis train is Trend. You may think you already understand this concept since we throw the word around as common language – “The stock market trend is going up”, for example. But within DV, you have different Trend options available to you whether it be the Method (Linear, Exponential, or Polynomial), the Degree, or the Confidence Interval (90%, 95%, 99%, off).

Trend can be found on the Advanced Analytics tab and can be added to a visualization containing a date (shown here by right-clicking on “Trend” to add it to my visualization).

My data set is the finish times, in minutes, of the male winners of the Boston Marathon since 1900 (I’ll add women later, but because of sexism, they were not added until the late 1960s). When I plot the data with Trend added, I get the following result:

The arrow shows the automatic trend line that was added to my visualization. I have put a box around the details behind my trend line. By default, the trend line is a Linear line with a 95% confidence interval. But what are the other options? If you open the Method drop down list, you will see options for Linear, Exponential, and Polynomial. But what do each of these mean? Let’s take a look…

Let’s start with our default, the Linear trend line. Not to go into the math behind the line, but (yep, I’m going there) the formula is one that you will recognize from junior high:

Y(t) = a + bt

Or, more commonly,

y = mx + b

where:

a = b = intercept = ȳ − slope · x̄

b = m = slope = Σ((x − x̄)(y − ȳ)) / Σ(x − x̄)²

y = dependent variable (time is required for y to have a value)

t = time; the independent variable (will occur with or without y)

So, why is (any of) this important?

A linear trend creates a straight line. While somewhat helpful when the data points seem to follow a straight line, it’s often not appropriate for most types of data. This is especially true if, particularly towards the end of the line, the actual data points are not even in the Confidence Interval band (wait for it…). If the data points at the end are not in the CI band, then how can you accurately forecast? This brings me to the 2 other types of trend methods available…

The next on the list is Exponential:

The Exponential method is especially useful if you see sharp growth or decay in your numbers (but cannot be used if there are zero or null data points in your set!). If I change to this method, you will see that my trend line is starting to get a curve:

Again, we are at our default 95% CI. Because we have no sharp incline or decline in our data, we won’t see the real value of this method.

The formula for Exponential Trend is:

Y = A·rˣ

Where:

r = 10ᵐ, where m is the same formula as in Linear for the slope

A = 10ᵇ, where b is the same formula as in Linear for the intercept

In our example, if we take the first 10 years and math it all out in Excel, here are the results:

The bottom line is the regression. If we filter for only the first 10 years, we can see our trend line change because there was some volatility in 1909 (from research, I found that it was 100°…YES, ONE HUNDRED DEGREES…on the day of the race!).

Let’s move onto the Polynomial trend line… When I add this Method to my graph, I get a more true trend over time. We also get a new option: Degree. We will get to that…

We can see that more data points fall in the 95% CI band. This is because this method allows for more data fluctuations over a large data set. For financial data, this would be my recommendation. Actually, in general, I opt for a Polynomial trend.

The formula is more complicated…

Yᵢ = b₀ + b₁tᵢ + … + bₚtᵢᵖ + eᵢ

Sigh. I could go into more details, but suffice it to say that the formula is an extrapolation of the Linear trend to account for variations in all the data points around a 3-dimensional axis (vectors, for those who have had Linear Algebra). In fact, if you have had Linear Algebra (I just sneaked by, myself), you might notice you can create matrices to do the math by using the formula:

b̂ = (X′X)⁻¹X′y

where b̂ equals the inverse of (the transpose of the matrix times the original matrix), times the transposed matrix, times y. Yeah, I’ll spare you.

This is where the Degree gets important…it determines how many iterations of the above formula you will do and to what degree (aha!). A first-degree polynomial is called a straight line, or linear regression. A second-degree polynomial is called a quadratic regression (or exponential, for our purposes). The next degree, 3rd (which is why the Degree option starts at 3), is called a cubic polynomial; the 4th, quartic; and so on.

But why would you want to change the Degree? It depends on how many “valleys and peaks” you want in your trend line. You will have at most n-1 valleys and peaks for the Degree (n) you choose. Since it defaulted to 3, I have 2:

If you have a full year of data, you may want to choose a Degree of 4 to show how each season is represented in your graph. See, it is all making sense! If you’ll permit me to get a bit nuts, I’m going to change the Degree to 7. I will expect up to 6 valleys and peaks in my data:

Although subtle at the end because the times have, more or less, normalized, I can still pick out my 6 waves:

You’re never going to settle for the default Trend options again, are you?

Let’s go back to Confidence Intervals… A CI is the range within which a given recorded value is expected to fall with a stated probability. The calculation is pretty simple:

CI = x̄ ± z (s / √n)

If we calculate the CI for the first 10 years of the Boston Marathon finish times, we get

n = sample size = 10 years

x̄ = mean = 158.316 minutes

s = standard deviation = 9.2311

z-score for 95% = table lookup = 1.96

CI = 158.316 ± 1.96 (9.2311 / √10)

CI = 158.316 ± 5.7215

There’s our CI for the first 10 years – 95% sure that the numbers will fall between 152.5945 and 164.0375. You might also recognize this as the margin of error you keep hearing about…

If you increase the CI, you will notice the band gets larger. You can also turn the CI off to focus on the trend alone.

Hopefully you have a better appreciation for what the Trend Methods and Degrees can do for you. Do NOT just stick with the Trend defaults! You are missing out on key data analysis results!


Background: I recently performed a POC where the customer was running Essbase 6.5 (yes…I was in junior high when 6 came out!) and wanted to see the level of complexity of migrating to the cloud so they could utilize BI and DV with Essbase (as well as the additional Essbase functionality).

I am cautious when I tell people that it’s pretty simple to move to the Essbase cloud because it seems each organization has a ‘gotcha’ of some sort. This one was no different… Because they were using Essbase 6, I could not use some of the new, fancy tools to extract the Essbase artifacts into a cloud-ready format because…it’s old. Just opening the outline was a bit of work in itself, because I didn’t have a ready on-premises Essbase environment available and I wasn’t even sure if a version 6 outline would load to, say, 11.1.2.4. I even tried creating a shell application in EssCS to see if I could import an otl file, but OAC does not accept that as a file type to upload…

(Note that you cannot use the OAC “dbxtool” to extract an outline from an OP version of an Essbase cube that is version 6.)

Finally, I reached out to an Essbase friend for a copy of AppMan (Application Manager) from back in the version 6 days, and he came through to save the day! I was able to view the outline for any gotchas (none, and I didn’t expect any). Since the otl didn’t load, I was curious if my rules files would (1) load and (2) be usable without modification. The answer was yes to both! This made loading dimensions and data MUCH easier since the older style is apparently still in use in EssCS… (Don’t even ask me about creating new rules files in EssCS…)

Below is how I planned to tackle the lift and shift (LnS) from Essbase 6.5 to EssCS:

  1. Open the otl in AppMan to verify structure and outline aggregation.
  2. Determine which dimensions are static and need to be built manually, either in EssCS or in the Cube Builder worksheet.
  3. Create the static dimensions in the Cube Builder workbook.
  4. Load the Excel workbook to OAC, including the skeleton dimensions.
  5. Load the dynamic member text files and rules files to OAC.
  6. Load data.
  7. Create and run a “default” aggregation (see the MaxL note after this list).
  8. Check that the data aggregated correctly.
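
If you would rather script step 7 than run it from the OAC UI, the MaxL equivalent is roughly the following (the application and database names are placeholders for the customer’s cube):

    execute calculation default on AppName.DbName;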

I had the customer export the artifacts from their Essbase cube:

Note that this cube is what I referred to as “Vanilla”. They said “Kindergarten”…not my words!

Okay, to my planned attack method…

In AppMan (thanks, CubeCoderDotCom!), I see that most dimensions are all summed to the top. The only one that is not is the Scenario dimension when comparing the Actual vs Budget.

Of these dimensions, the following lists which are static versus dynamic (dynamic meaning they are updated each time the cube is updated). I’ve also noted whether each dimension is Dense or Sparse as an FYI.

  • Fund – Static; Dense
  • Cost – Dynamic; Sparse
  • Organizational – Dynamic; Sparse
  • Functional – Dynamic; Sparse
  • Responsible – Dynamic; Sparse
  • Year – Static; Dense
  • Scenario – Static; Dense

I downloaded the BSO outline workbook to have a place to start:

Next, I started filling it in:

Essbase.Cube worksheet:

Cube.Settings worksheet:

Note that I changed the “Aggregate missing values” and “Two-Pass calculation” to “Yes” and also added our Alternate Alias Table so we have a placeholder.

At this time, I am not going to add any Generation names to my levels, so I cleared the Cube.Generations worksheet.

Now we start the dimensions… Since all 3 of our dense dimensions are static, I’m going to go ahead and build out their outlines; for the remaining, sparse dimensions, I’m going to create the shell so we can load them later via text and rules files.

Dim.Year:

Dim.Scenario

Dim.Fund

Dim.Cost

Dim.Organizational

Dim.Functional

Dim.Responsible

I deleted the data worksheet since we will be loading that via OAC.

Let’s see how good my skills are with an upload of this workbook to OAC.

An error: Property “Dimension Name” must have value in sheet “Cube.Generations”.


Heraclitus was and is still correct. Seems that whenever I make a statement regarding permanence, I am forced to retract. That’s not always a bad thing. If I have talked to any of you over the past couple years, you have heard me state how much I loved working at Oracle and how I could see myself retiring there in 20 years. I loved my job – Public Sector BI and Analytics pre-sales – and the people I worked with/for. However, life presents interesting opportunities sometimes.

I started out at Oracle focused on BICS and OBIEE. I had gotten pretty burnt out on EPM consulting and wanted to work with new technology and with scenarios where I could fully use my undergraduate degree. Although I used the Information Systems part of my degree regularly, I really missed using my Quantitative Analysis degree. My position at Oracle, and the direction of analytics there, afforded me the opportunity to do statistics as part of my job as well as use the R training I received in college (although I had to relearn the code basics, as that was 14 years ago…). I remembered how much I loved analytics. I loved the math and visualizations. I loved working with organizations that were trying to implement a new or upgraded BI and/or analytics system. I really enjoyed presenting Data Visualization and watching people’s eyes light up when they saw how easy it was to use and build visualizations from an Excel document. I loved working with the DoD and Federal teams. I felt like I was back at home with my “people”. The Sales Reps I worked with were top notch. My teammates were fun, driven, exciting, and hard-working. There was never any competition, only collaboration. I loved my job. In fact, it was the best 2 years of my career, and I hope the rest of my career will live up to my time at Oracle, as I never felt like I was going to work a single day in my 2 years as an employee.

But sometimes an opportunity comes up out of the blue for you to stretch your wings a bit and throws you off the course you had planned through retirement. Starting January 18th, I will be the VP of BI and Analytics for Accelytics. I expect this to be an interesting position as I will be building up the BI and analytics team and working with customers implementing these tools. I’ll be working with Oracle Analytics Cloud immediately and that excites me given the tool offering is still less than a year old and evolving rapidly. I hope to have more time to dedicate to writing more OAC blog posts. I’ll still be involved in the conference world so I’m not going anywhere! If anything, you may see/hear more from me, like back in my pre-Oracle days. I could not be more thrilled that I am still working with Oracle technology. Accelytics is an Anaplan partner and I look forward to learning that technology, as well. Always good and fun to learn new technologies!

With that said, I hope you’ll welcome me back to the Oracle partner/customer world with open arms. I’m excited to be back and continue my work with and for Oracle with a partner for the community!

Upcoming Conferences and Presentations:

Indiana Oracle Users Group: January 26, 2018, presenting “The New BI”
Anaplan HUB: March 5-7, 2018, as an Exhibitor
Analytics and Data Summit: March 20-22, 2018, presenting “Oracle & R – Advanced Analytics on Steroids”
Kscope 2018: June 10-14, 2018, as a Bronze Sponsor (and hopefully presenting!…TBD)


Okay, I don’t really hate Wayne’s beard, but I hate what it stood for when we were on a project together 4 years ago. Before I go into why, let me give you the background on my history with Wayne and Cindy.

I was thinking last night about when I met Wayne. I honestly could not remember ever “meeting” him. It’s like he has always been in my life in some way or form for the past 5-ish years. I don’t know if it was through ODTUG, Kscope, the Oracle ACE Program, Oracle, or …what. I just know that when we were on a project together in Minneapolis (IN THE DEAD OF WINTER) 4 years ago, I already knew him. Being on the project together, seeing him get his ACE status during that time, working together for Kscope items, and working for the same company allows you to get to know someone. During dinners, sitting at the airport, and other times together, he always talked of his wife, Cindy. She was the joy of his life and he loved talking about her. One thing he talked about was Cindy having cancer and that he was growing out his beard until she was declared in remission. To me, it was a physical reminder of his internal pain. I hated that he had that beard.

I rolled off that project in March and didn’t see him again until June at Kscope. Almost immediately, I noticed that he was cleanly shaven. I gave him a sideways look and asked, “Your beard?!” And a proud joy came across his face when he yelled, “YES! It’s gone!” A big hug soon followed. I reveled in the joy I got to share with him that afternoon.

Thanks to Facebook, I got to connect with Cindy soon after this and have fun with her and Wayne virtually. I thought often about how I wished we lived closer as I really enjoyed her and Wayne. That fall at Oracle OpenWorld I got the opportunity to see Wayne again and called him my date that night with Cindy’s permission. She joyfully gave it and we got a great picture to share with her (notice his beard is back, but for no particular reason this time).

The following summer, Kscope was held in Chicago and I found out that Cindy would be joining. I was thrilled to finally meet her in person! We got a group together for dinner post-Kscope to meet and unwind after the conference. She was even lovelier in person than virtually. I kept thinking that she has an amazing energy that was pure joy and happiness. I again lamented that we did not live close to each other.

Last year I got the news that Cindy’s cancer had returned. I was heartbroken and immediately asked God why and for healing. I had gotten to know her even more via social media and was always enjoying her posts about traveling for historical reenactments, finding antiques, and loving on her son, Clark, and Wayne. I was able to get together with Cindy, Wayne, Mike Riley, and his wife, Lisa, in October while visiting for an Oracle conference presentation. As usual, there was lots of laughing and wishing we all lived closer. This was the first time I had seen Cindy in person while she was going through cancer treatment, and one thing clearly stood out to me – this woman can find joy in anything. When talking about things going on with treatment, she nearly always had a smile and always found something positive. This really hit home because if she can find joy in cancer treatments, I can find joy in life’s easier curveballs.

My heart is heavy knowing Cindy is in pain and time is short with her. But she will live on for many, many years. My heart is heavy knowing Wayne and Clark’s bright light is going through a dim period.

Cindy, I pray that I can learn to find joy in things the same way you do. You’ve made me smile and laugh more than Facebook can let me “like”. Your love for your husband, Clark, family, and life is radiant. You are loved. I’ve decided to dedicate my racing season to you. Your name will be marked on my body each race. My race season’s mantra is “Be the Storm”. I know with you as my lightning that I will be the storm I strive to be in my races. I can’t wait to see what we do together this year!

GoFundMe for the Van Sluys’: https://www.gofundme.com/vansluys

Justin Biard’s Ode to His Friend Wayne: https://icodealot.com/my-friend/ 

Mike Riley’s Thoughts: https://realtrigeek.com/2018/01/04/its-time-to-rally-the-oracle-community-for-the-van-sluyss/ 



Thanks to Mike Riley for the following post for the GoFundMe page for Wayne Van Sluys and his family.

Cancer Sucks! I know, I’ve been there. And many of you were there for me and my family. Starting with my friend Chet Justice, the community rallied around us, and did countless good things for us. I was blessed to survive (just celebrated 4 years post-surgery and in remission). Sending us to a World Series game to take our minds off of cancer for a night, helping us pay bills, sending us cards, feeding us (thanks Kellyn and Tim!), sending us care packages, etc. It was amazing the way the community rallied, and amazing the support and love we felt.

Fast forward to today, and I feel it is time to send out a cry for help again, this time for a dear friend. Many of you know Wayne Van Sluys from ODTUG and KScope. Wayne is about the nicest guy I know. It breaks my heart to know that Wayne’s wife, Cynthia, has been battling cancer for the past couple of years. Shortly before Christmas, they received the devastating news that Cynthia’s cancer had spread and that chemotherapy or surgery would not be advisable. Cynthia has been sent home and is under hospice care now. Cynthia and Wayne have been married for many years now, and they have a son, Clark, who requires 24-hour care. They are a proud, happy, and loving family, and they need our help.

How can you help? If you know the family, reach out to them, send them a note, anything. I speak from experience unfortunately in that even the smallest gestures will mean a great deal to them. Don’t ask them what you can do for them. If you think of something, DO IT! Trust me, they won’t say no. Anything will help them, and will make you feel like you are making a difference (because you are).

Additionally, there is one more thing for you to do. If you can afford any amount at all (no donation is too small or too large), we (a group of Wayne and Cynthia’s friends) have set up a GoFundMe account for them. From the page “The funds raised by this campaign will help pay for: outstanding medical bills, memorial expenses, and continued care for Cindy. Remaining funds will be used to help pay for Clark to go to summer camp (www.wonderlandcamp.org)“.

 The address is: https://www.gofundme.com/vansluys

Please consider helping these folks out. Show them how a community always rallies to support one another. You’ve done it before. Let’s do it again. Thanks for reading!

