This guest contribution on the Altair Blog is written by the ESRD team, a member of the Altair Partner Alliance.

With the increased usage of finite element analysis (FEA) software tools for virtual prototyping of new and/or modified engineering designs, and the growing practice of benchmarking FEA results against available experimental data or engineering handbook solutions (i.e. “benchmarking-by-FEA”), it’s important to revisit what steps we must consider before performing an engineering simulation by numerical methods. After all, if we don’t plan ahead, we may find ourselves in a “garbage in, garbage out” situation!

Engineering Simulation Considerations: What Questions Should We Ask and Why?

Typically, engineering analysts are fully aware of the following considerations when defining a mathematical model for a structural simulation that will be used for comparison with experimental test data or engineering handbook approximations:

  • Is the CAD geometry an accurate representation of our actual part/assembly?
  • Do we have all needed material properties?
  • Are the loads and constraints fully understood and can they be properly defined in the engineering simulation?
  • Is a linear elastic analysis adequate for the goal of the analysis, or do we need a nonlinear analysis, and if so, which type?

We know that if any of the above is not clearly understood, the outcome of the effort may be an ill-defined simulation that will not help the engineering decision process, or worse, one that provides false or misleading feedback.

That said, the following aspects may not always be considered by engineering analysts in production environments but are critical for establishing confidence in the solution:

  • Are we solving the right set of engineering equations?
    • In other words, are we idealizing the model correctly?
  • To what accuracy is our solution converged?
    • In other words, are we solving the engineering equations right?
  • Does our FEA software provide simple means to show convergence, or is it a time-consuming and difficult process?
    • Do we require multiple mesh refinements to show that the answers don't depend on the number of elements or degrees of freedom? And are these additional refinements performed automatically by the FEA software tool, or must they be set up by hand?
  • Does our FEA software automatically average the results across element boundaries, such as stresses or strains?
    • And, do discontinuities in stresses or strains appear if we disable nodal averaging?
  • Are our FEA results highly sensitive to element types?
    • Do the results change if we modify the element integration scheme or hourglass control?

When was the last time you heard all of the above questions asked in a design review? And why are these topics even important? Knowing whether the engineering simulation was performed using appropriate modeling assumptions (problem idealization) and verifying that the simulation results have converged (solution verification) are essential aspects of the calculations.

Therefore, we need a clear set of “quality checks” for verifying the accuracy of engineering simulations so that engineering analysts can trust the information produced by the mathematical model and confidently perform “benchmarking-by-FEA” workflows.

Key Quality Checks for Verifying the Accuracy of Engineering Simulations

In a recent Altair webinar, we asked a simple but powerful question: if you routinely perform Numerical Simulation via finite element analysis (FEA), how do you verify the accuracy of your engineering simulations? During this webinar, we reviewed ‘The Four Key Quality Checks’ that should be performed for any detailed stress analysis as part of the solution verification process:

  • Global Error: how small and at what rate is the estimated relative error in the energy norm reduced as the degrees of freedom (DOF) are increased? And, is the associated convergence rate indicative of a smooth solution?
  • Deformed Shape: based on the boundary conditions and material properties, does the overall model deformation at a reasonable scale make sense? Are there any unreasonable displacements and/or rotations?
  • Stress Fringes Continuity: are the unaveraged, unblended stress fringes smooth, or are there noticeable "jumps" across element boundaries? Note: stress averaging should ALWAYS be off when performing detailed stress analysis. Significant stress jumps across element boundaries are an indication that the error of approximation is still high.
  • Peak Stress Convergence: is the peak (most tensile or compressive) stress in your region of interest converging to a limit as the DOF are increased, or is it diverging?

When the stress gradients are also of interest, there is an additional Key Quality Check that should be performed:

  • Stress Gradient Overlays: when stress distributions are extracted across or through a feature containing the peak stress, are these gradients relatively unchanged with increasing DOF? Or are the stress distribution overlays dissimilar in shape?
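
Checking peak stress convergence does not require special tooling. As a minimal sketch (not StressCheck functionality; the refinement data below are hypothetical placeholders), here is how one might extrapolate a peak stress from three refinement levels with a constant DOF ratio and estimate the remaining relative error:

import math

# Peak von Mises stress [MPa] at three refinement levels (hypothetical)
dof = [1_000, 4_000, 16_000]             # constant refinement ratio (x4)
peak_stress = [312.0, 339.5, 346.1]

# Assume algebraic convergence: sigma(N) ~ sigma_inf - C * N**(-beta)
s1, s2, s3 = peak_stress
r = (s2 - s1) / (s3 - s2)                # r > 1 indicates convergence
beta = math.log(r) / math.log(dof[1] / dof[0])   # estimated convergence rate
sigma_inf = s3 + (s3 - s2) / (r - 1)     # extrapolated limit value
rel_err = abs(sigma_inf - s3) / abs(sigma_inf)
print(f"rate ~ {beta:.2f}, limit ~ {sigma_inf:.1f} MPa, est. error ~ {100 * rel_err:.2f}%")

If r is less than or equal to 1, the sequence is not converging and the peak stress may be diverging, which is exactly the pathology this Quality Check is designed to catch.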

All of these Key Quality Checks are incorporated, and simple to use, in ESRD's StressCheck Professional, available via the Altair Partner Alliance. The following 6-minute video demonstrates how to use StressCheck Professional to perform "benchmarking-by-FEA" for a practical case study.

In the video, a benchmarking-by-FEA case study is performed for a tension bar of circular cross section with a semi-circular groove. The goal was to compute the 3D stress concentration factor by classical approximation (Walter D. Pilkey's 'Peterson's Stress Concentration Factors', Section 2.5.2) and by Numerical Simulation (StressCheck FEA) for several values of Poisson's ratio, demonstrating the effect of Poisson's ratio on the 3D stress concentration factor.

Interested in learning more? Watch the ESRD/Altair on-demand webinar "How Do You Verify the Accuracy of Engineering Simulations?" now!

The post Benchmarking by FEA: Best Practices & Key Quality Checks to Verify Results Accuracy appeared first on The Altair Blog.

Does using an illegal corked bat really improve your chances of hitting a home run?

This post was written by Will Haines, Eric Nelson, and Dmitri Fokin

This week's Major League Baseball All-Star Game and Home Run Derby will showcase baseball's biggest and brightest stars. Seeing hitters launch baseballs unfathomable distances into the bleachers got us thinking: are there ways hitters could gain an edge over the competition? And could we simulate just how much of an effect such an edge would have on their performance?

In 1994, Albert Belle of the Cleveland Indians was one of the biggest power hitters in baseball. But he was hiding an illicit secret to his success. Before a July game in Chicago against the White Sox, the opposing manager was tipped off that Belle may have been using an illegal corked bat. Umpires confiscated the bat and locked it in their dressing room.

Knowing the bat would soon be tested and his cheating discovered, Belle’s teammate Jason Grimsley crawled through an overhead crawl space connecting the visitor’s locker room to the locked umpire dressing room, lowered himself down through a displaced ceiling tile, and switched Belle’s bat with a legal bat. Evidence of this caper was quickly discovered. The new bat had the signature of teammate Paul Sorrento on it, two ceiling tiles were askew overhead, and the floor was littered with clumps of ceiling insulation. Albert Belle was suspended for 10 games, and the corked bat heist went down as one of the most colorful stories in baseball cheating history.

Since the beginning of the sport, baseball players have been looking for every competitive advantage to enhance their performance. One of the oldest, and most debated, practices in baseball’s rich history of cheating is the “corked” bat. Major league bats are typically made of solid wood from maple or ash trees. A corked bat is an illegal modification made by drilling a hole in the bat’s barrel and filling it with lighter, less dense material like cork. This lighter bat is thought to allow the hitter to swing the bat faster while improving the hitter’s timing and possibly even creating a spring effect to improve power.

With our modern understanding of physics and advanced simulation technology, we wanted to put the corked bat theory to the test. Does a corked bat really help batters hit more home runs? We used nonlinear finite element analysis technology to find out.

Boundary Conditions

In the first simulation experiment, we compared the performance of a solid wood bat made of ash to a bat with a cork insert using the same swing and pitch speed conditions. We used a rotational speed of 44.6 radians per second (rad/s) to correspond to a 70-mph bat velocity, roughly the average swing speed of a major league hitter.

Boundary conditions set on the bat for initial rotation around a pivot point and initial velocity

The initial velocity of the ball was set to 40.23 meters per second, equivalent to a 90 miles per hour pitch. The baseball weighs 141.7 grams and is composed of white cotton, grey woolen yarn, white woolen yarn, red rubber, black rubber, and a cork core, all modeled as viscous hyperelastic materials (Ogden material law).

Cross-section of baseball simulation model
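
As a quick sanity check on the quoted units (a back-of-envelope sketch, not part of the Radioss model; the impact radius is inferred from the article's numbers, not stated in it):

MPH_TO_MS = 0.44704

bat_speed = 70 * MPH_TO_MS        # 70 mph -> 31.29 m/s
omega = 44.6                      # rad/s, as quoted above
radius = bat_speed / omega        # implied radius of the 70-mph point
print(f"70 mph = {bat_speed:.2f} m/s -> impact point ~ {radius:.2f} m from the pivot")

pitch_speed = 90 * MPH_TO_MS      # 40.23 m/s, matching the stated ball velocity
print(f"90 mph pitch = {pitch_speed:.2f} m/s")

The numbers are self-consistent: a 70-mph bat velocity at 44.6 rad/s puts the impact point roughly 0.70 m from the pivot, a plausible location for the barrel's sweet spot.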

The lighter cork material made the corked bat 72 grams lighter than the solid wood bat; otherwise, all variables were held constant between the two simulations.

Experiment 1: Comparison of Solid and Corked Bats Swung at the Same Speed

The simulated maximum contact force was 35.241 kilonewtons (kN) for the solid ash bat and 35.660 kN for the corked bat. The final velocity of the ball hit with the legal bat was 91.7 mph, while the ball hit with the lighter illegal bat left at 89.47 mph, a difference of about 2.5%. It's clear from this first simulation that a solid wood bat performs slightly better than a lighter corked bat swung at the same speed, likely due to the greater mass of the solid ash bat. However, given the corked bat's lighter weight, it stands to reason that a batter could swing the illegal bat faster. Would this tilt the performance in favor of the corked bat? Further experimentation was needed.

Simulation variant 1: Normal bat swung at 70mph

Simulation results from variant 1
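
The FEA trend can also be cross-checked against the classic rigid-body "collision efficiency" model of the bat-ball impact, BBS = q * v_pitch + (1 + q) * v_bat with q = (e*M - m) / (M + m), where M is the bat's effective mass at the impact point, m the ball mass, and e the ball-bat coefficient of restitution. The sketch below uses assumed values for M and e, so only the relative trend should be compared with the simulation:

def batted_ball_speed(v_pitch, v_bat, M_eff, m_ball=0.1417, e=0.5):
    """Rigid-collision estimate; speeds in mph, masses in kg. M_eff and e are assumptions."""
    q = (e * M_eff - m_ball) / (M_eff + m_ball)   # collision efficiency
    return q * v_pitch + (1 + q) * v_bat

solid = batted_ball_speed(90, 70, M_eff=0.800)           # assumed effective mass
corked = batted_ball_speed(90, 70, M_eff=0.800 - 0.072)  # 72 g lighter, per above
print(f"solid {solid:.1f} mph, corked {corked:.1f} mph, "
      f"penalty {100 * (solid - corked) / solid:.1f}%")

With these assumptions the model predicts a 2-3% exit-velocity penalty for the lighter bat at equal swing speed, the same direction and similar magnitude as the Radioss result.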

Experiment 2: Comparison of Solid and Corked Bats with Higher Corked Bat Swing Speed

With the same pitch speed conditions, we wanted to account for this potential increase in swing speed with a corked bat. In our second experiment, we simulated the difference in force transfer and ball exit velocity between a solid wood bat swung at 70 miles per hour and a corked bat swung at 73 miles per hour (an educated guess at the increased rotational acceleration offered by the reduced bat mass). Setting the corked bat's rotational speed to 52.17 rad/s, we tested this against the same solid wood bat swung at 44.6 rad/s from experiment 1.

Comparison of impact between normal bat, corked bat, and corked bat swung at faster speed

By applying the increased bat rotational speed, we observed a significant increase in contact force and ball exit velocity. The loss of mass in the illegal bat was more than offset by the hitter's ability to swing the bat at much greater speed: the final baseball velocity off the corked bat was 11.5% greater than off the solid ash bat.

The next step was to determine how much this difference in exit velocity would affect the hitter’s probability of hitting a home run.

The most common launch angle for home runs in the major leagues is between 25 and 30 degrees. We used a tool from Major League Baseball's website to compare the probability of a home run between our solid wood bat and the corked bat swung at the higher speed. The Statcast Exit Velocity & Launch Angle Field Breakdown tool produced the following results.

Outcome probability of 92mph ball velocity at 27-degree launch angle
Data Source: https://baseballsavant.mlb.com/statcast_field © MLB Advanced Media, LP. All rights reserved.

Outcome probability of 103mph ball velocity at 27-degree launch angle
Data Source: https://baseballsavant.mlb.com/statcast_field © MLB Advanced Media, LP. All rights reserved.

A ball hit with an exit velocity of 92 mph at a 27-degree launch angle would have a 3% chance of being a home run. A ball hit at 103 mph on the same launch angle would make the odds of a home run a staggering 81.5%. Additional factors such as backspin of the ball, weather conditions, stadium dimensions and elevation could also be factored in, but generally, the results of our simulation allow us to conclude that the added exit velocity produced by the added swing speed of the corked bat has a significant effect on home run probability.

Although Albert Belle was ultimately punished for his corked bat, it seems that the science backs up his use of this illegal technique. Cheaters never win in the end, but based on this simulation, you might be able to hit a couple extra dingers before you get caught.

All simulation for this blog was done using Altair Radioss™, a leading structural analysis solver for highly non-linear problems under dynamic loading. Learn more about Altair Radioss here: https://altairhyperworks.com/product/RADIOSS

Thanks for continuing to follow our Digital Debunking series. We are having a ton of fun researching and writing these blogs. If you have an idea for what we could debunk next, leave a comment below. It could be one of our next features!

The post Digital Debunking: Baseball’s Great Corked Bat Debate appeared first on The Altair Blog.

This guest contribution on the Altair Blog is written by the CAEfatigue Team, a member of the Altair Partner Alliance.

If you are new to the Frequency Domain, it is understandable that you may think this "stuff" is way different from the Time Domain. This is especially true when we ask folks to do material damage calculations in the Frequency Domain. You may hear that "this is a whole new way of thinking" … or, is it?

During our onsite and online training sessions, we often see a look of total confusion in the eyes of our students when we start to talk about the Frequency Domain. Many have some comforting, albeit vague, memories of the Fourier Series from their college days, but many have been working in the Time Domain for so long that any thought of changing to the Frequency Domain is very daunting.

However, are we really changing that much … especially when it comes to a fatigue damage calculation?  Below is an image we often use to convince students that there really is not that much difference.

The material fatigue calculation process, in the image below, has 6 steps regardless of whether you start with a Time Signal or a PSD. You start with (1) a stress time signal or PSD, then (2) you apply some sort of cycle counting process to eventually generate (3) the stress range histogram. From here, (4) you refer to the fatigue material properties, (5) use Miner's Rule to generate cycles to failure, and (6) present the damage / life results.

If you look at the images below, you will see that steps (3), (4), (5) and (6) are identical whether you are working in the Time Domain or Frequency Domain. So, we really need to only focus on steps (1) and (2).

STEP 1: Conversion of the Time Signal to a PSD

We like to call this "conditioning" of the time signal because you cannot simply convert a time signal to a PSD without properly conditioning (or correcting) the time signal to satisfy the 3 key assumptions that must be met.

Most often we see issues with STATIONARITY: the need for the signal to have the same statistical properties regardless of which time slice you look at within the time signal.

Below are multiple time signals belonging to the same Event that are non-stationary.  The time signals have several low intensity sections as well as sections that appear to have different frequency content.  The statistical properties across these time signals are different depending on the time slice you select to analyze.

Since we are interested in the damage that these signals will cause to a structure, we cannot simply convert the signals "as is": the non-stationary sections add time to the duration of the loading, which is not appropriate, since the loading duration should reflect only the parts of the signals that do the damage, not the low-intensity sections that cause little to no damage.

In this case, the Event should be broken into 3 separate (and shorter) Events that only reflect the parts of the time signals that cause significant damage (see below).  The remaining parts of the signals can be ignored as they cause little to no damage.

CAEfatigue Limited provides conversion / conditioning tools called TIME2PSD (manual) and CAEfatigue CONDITIONING – CFC (automatic) that do this work for our Users. Below is an image of the first "new / shorter" Event 1 from above, which has been conditioned (second plot) and converted into PSDs (third plot). These properly converted PSDs are then brought into a CFV fatigue analysis.

STEP 2: Cycle Counting the PSDs

We use the term “cycle counting” just to make new students a little more comfortable.  In fact, we really do not cycle count but follow a new process that eventually produces a stress range histogram similar to what is produced when you cycle count a time signal.

The process starts with calculating the “spectral moments”.  These spectral moments are then used in a “fatigue modeler” to generate a probability density function (pdf) that gives us the distribution of the stress cycles across the stress range.  We use this pdf to distribute the total number of cycles calculated from the Response PSD to calculate a histogram of stress cycles.

CFV provides multiple methods to calculate the pdf of stress cycles in the frequency domain.  However, the CFV software currently uses the DIRLIK approach as the default method.  This method works well for all forms of random input (both wide band and narrow band PSDs) and will also work well when random input PSDs are mixed with deterministic loading (i.e. sine on random analysis).

The formula to calculate the histogram of stress ranges, n(S), is given below.  This calculation is done at every element / node location throughout the model using the appropriate Response PSDs (and/or deterministic loading) and summed together to generate the histogram of stress ranges.  This data then allows the calculation of fatigue damage at every element / node location following the remaining steps (4), (5) and (6) as talked about at the beginning of the blog.
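
In standard notation (reconstructed from the symbol definitions that follow), the formula is:

$$ n(S) = E[P] \cdot T \cdot p(S) $$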

Where:

n(S) is the TOTAL number of rainflow cycles (or, perhaps a better term, stress range cycles) for a given stress value. When plotted for all stress values, this produces the stress range histogram.

E[P] is the number of stress cycles per second calculated from the Response PSD. This is also called the Expected Number of Peaks of the Response PSD.

T is the duration of the Event loading in seconds. By default, CFV uses 1 second.

p(S) is the probability density function (pdf) of stress cycle ranges (peak to peak). By default, CFV uses the Dirlik Method to calculate this function, which tells us how to distribute the stress cycles from E[P].

To calculate E[P] and p(S) we need to first calculate the spectral moments.
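
For a one-sided PSD, the spectral moments take the standard form:

$$ m_n = \int_0^\infty f^n \, G(f) \, df, \qquad n = 0, 1, 2, 4 $$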

f is the frequency of interest

G(f) is the height of the one-sided Response PSD at the frequency of interest

Once we calculate the moments m0, m1, m2 and m4 we can calculate the expected peak rate (i.e. total number of cycles/second) using the formula
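
$$ E[P] = \sqrt{\frac{m_4}{m_2}} $$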

and calculate the stress cycle pdf, p(S), using the DIRLIK formula below.
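
In Dirlik's standard formulation, this is:

$$ p(S) = \frac{\dfrac{D_1}{Q} e^{-Z/Q} + \dfrac{D_2 Z}{R^2} e^{-Z^2/(2R^2)} + D_3 Z \, e^{-Z^2/2}}{2\sqrt{m_0}} $$

with

$$ Z = \frac{S}{2\sqrt{m_0}}, \quad x_m = \frac{m_1}{m_0}\sqrt{\frac{m_2}{m_4}}, \quad \gamma = \frac{m_2}{\sqrt{m_0 m_4}} $$

$$ D_1 = \frac{2(x_m - \gamma^2)}{1 + \gamma^2}, \quad R = \frac{\gamma - x_m - D_1^2}{1 - \gamma - D_1 + D_1^2}, \quad D_2 = \frac{1 - \gamma - D_1 + D_1^2}{1 - R}, \quad D_3 = 1 - D_1 - D_2, \quad Q = \frac{1.25(\gamma - D_3 - D_2 R)}{D_1} $$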

Where the probability density function p(S) is solely a function of moments m0, m1, m2 and m4.

If we (again) use the comforting term "rainflow cycle", below we see the histogram of rainflow cycle counts, n(S), versus stress bin number. With a little added manipulation, this can be converted to a stress range histogram.

We have now taken care of steps (1), (2) and (3), and can manage the rest of the damage calculation in the same manner as we do in the Time Domain.
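
For readers who want to see the whole chain in one place, the sketch below strings steps (2) through (6) together in plain Python, starting from an already-conditioned Response PSD (the output of Step 1). It is an illustration of the standard method, not CAEfatigue CFV code, and the response PSD and S-N constants are hypothetical:

import numpy as np

# A one-sided stress-response PSD G(f) [MPa^2/Hz]; hypothetical band-limited noise
f = np.linspace(1.0, 150.0, 3000)                     # frequency [Hz]
G = np.where((f >= 10.0) & (f <= 100.0), 1.0, 0.0)

# Step 2a: spectral moments and expected peak rate
m0, m1, m2, m4 = (np.trapz(f**n * G, f) for n in (0, 1, 2, 4))
EP = np.sqrt(m4 / m2)                                 # peaks per second

# Step 2b: Dirlik coefficients (functions of m0, m1, m2, m4 only)
xm = (m1 / m0) * np.sqrt(m2 / m4)
gam = m2 / np.sqrt(m0 * m4)
D1 = 2.0 * (xm - gam**2) / (1.0 + gam**2)
R = (gam - xm - D1**2) / (1.0 - gam - D1 + D1**2)
D2 = (1.0 - gam - D1 + D1**2) / (1.0 - R)
D3 = 1.0 - D1 - D2
Q = 1.25 * (gam - D3 - D2 * R) / D1

# Step 3: stress-range histogram n(S) = E[P] * T * p(S) * dS
S = np.linspace(1e-3, 8.0 * np.sqrt(m0), 500)         # stress ranges [MPa]
Z = S / (2.0 * np.sqrt(m0))
pS = ((D1 / Q) * np.exp(-Z / Q)
      + (D2 * Z / R**2) * np.exp(-Z**2 / (2.0 * R**2))
      + D3 * Z * np.exp(-Z**2 / 2.0)) / (2.0 * np.sqrt(m0))
T = 1.0                                               # event duration [s] (CFV default)
nS = EP * T * pS * (S[1] - S[0])

# Steps 4-6: hypothetical S-N curve N_f = A * S**(-b) and Miner's rule
A, b = 1.0e12, 3.0
damage = float(np.sum(nS * S**b / A))
print(f"E[P] = {EP:.1f} peaks/s, damage per second = {damage:.3e}")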

CONCLUSIONS:

When calculating the fatigue damage / life in the Frequency Domain, we can fall back on many of the things we already know about the process from the Time Domain. Our only challenge is to:

  1. Properly convert the Time Signals to PSDs by conditioning the time signals first, prior to the conversion.
  2. With a properly converted PSD, use Spectral Moments and a Fatigue Modeler (like Dirlik) to calculate the pdf and stress range histogram.

Once these 2 steps are done, we can calculate damage / life in the same manner as we would do in the Time Domain.

So, is there a big difference when calculating material fatigue damage between the Time Domain and the Frequency Domain? It depends on who you ask, and when. To anyone new to the Frequency Domain, the answer will be a resounding "YES!" But ask that same person after they have had the appropriate training and some experience, and perhaps the answer will be "actually, not so much!"

The post Frequency Domain Fatigue Damage Calculation Process: Is it Really that Different? appeared first on The Altair Blog.

As automotive OEMs push to reduce design cycles and reuse their platforms across multiple programs, analysis feedback in the early phases of the design cycle has become invaluable. The easy-to-learn mesh morphing features of HyperWorks X bring efficiency to teams working on simulation models early in product development. These workflows enable concept-level changes to be made directly on an existing FEA model, bypassing CAD generation and accelerating decision making.

The HyperWorks X Morphing workflows enable an existing product or system to be modified quickly. This allows you to explore more ideas sooner using a current model without the slow, costly and labor-intensive traditional geometry creation and meshing tasks. Multiple variants can be generated quickly using the manipulation tools, while the rebuild algorithm maintains the desired mesh quality.

Design iterations such as changing the wheelbase or roof height of a body-in-white, along with modifying the doors to match the intended design change, can easily be performed using the morphing workflows in HyperWorks X. In addition, creating new design content, such as cross-members, has never been easier. Users can quickly create cross-sections on the fly or import existing sections from a design library to create a concept. The new meshing workflow ensures that newly created parts are meshed appropriately.

All the latest and trusted HyperMesh functionality is also available within HyperWorks X. HyperMesh 2019 creates the best meshes yet for more accurate simulations and enables easier modeling of complex castings and injection molded parts. The model build and assembly tools enable CAE to keep pace with design through rapid part and assembly swapping with management of multiple configurations.

No matter which industry you work in, I'm excited for you to try HyperWorks X and experience the pre-processing power and efficiency for yourself.

Check out the HyperWorks 2019 Webinar Series to learn more about all the latest products, features, and updates.

The post Faster Concept Modeling: Bringing Efficiency to Early Product Development appeared first on The Altair Blog.

In “Democratizing Smart Buildings with IoT,” we talked about how the advent of powerful and cost-effective Internet of Things solutions has opened up the realm of connected technology to small- and mid-sized buildings — a benefit that was previously limited to large commercial sites.

Smaller spaces and smaller budgets are no longer a barrier to implementing a smart building solution, with a new generation of IoT devices such as sensors, thermostats, and different types of lighting control solutions driving the cost of data and device management down to only a few cents per square foot. Now owners and managers of smaller buildings can boost efficiency and get insight into building system performance.

Now we’ll share a couple of scenarios in which we implemented Altair SmartWorks solutions to turn smaller facilities into smart buildings.

Connecting a Special-Needs School

We worked with a special-needs school in California to help the facility manager get insight into after-hours energy usage and optimize building HVAC performance. They were on a tight budget but wanted to reduce operational costs and ultimately create a better learning environment for their students.

Before getting started, they didn’t know how their assets were performing or if those assets were running efficiently. They had no insight into off-hours energy consumption, which they believed was contributing to higher-than-average energy bills.

The solution was to implement the cost-effective Altair SmartEdge IoT platform and install several multichannel energy meters, a submeter, and connected, programmable thermostats that enabled the school to pinpoint higher-than-expected consumption on non-working days and after hours. They then assessed factors that could be causing the problem.

As a result, the school was able to reduce off-hours energy consumption, optimize equipment usage, and create comfortable conditions for everyone in the school.

With its IoT system in place, the school now uses pre-set thermostat schedules to regulate the classroom environment and data analysis to make smart decisions. Off-hours energy consumption is down, and the building’s HVAC equipment is operating more efficiently than ever. Most importantly, students are in a comfortable environment and staff members are free to concentrate on their primary mission — education.

Making Commercial Real Estate Smart

Another project we undertook was for a Canadian commercial real estate management company. A community college was one of its larger tenants, and the school operated extended hours to serve its class schedule. All tenants were billed utility costs on a per-square-foot basis, as is typical with small commercial buildings, and as a result the smaller tenants with standard business hours were paying for a portion of the college’s energy usage.

The lease was due to be renegotiated and the building owner wanted to submeter the school so they could understand their actual energy use and negotiate a different cost model. There had also been two leaks that led to flooding in the previous 18 months, which resulted in around $175,000 in damage and increased insurance premiums. They wanted to prevent that from happening again.

We used SmartEdge and low-cost sensors to accomplish these goals, installing 42 wireless leak detection sensors in bathrooms and near water coolers, a sensor in the parking garage, and a shut-off valve on a main water line that closes automatically when a sensor triggers, as part of the rules we created in the platform. We also created setbacks on the thermostats. This implementation resulted in fair and balanced billing for tenant energy consumption, in addition to over 22% in energy savings.
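
To make the leak-detection rule concrete, the sketch below shows the shape of the logic in Python. The function names are hypothetical placeholders, not the actual SmartEdge API:

def on_sensor_update(readings, close_valve, send_alert):
    """readings maps sensor id -> True when that leak sensor detects water."""
    wet = [sensor for sensor, is_wet in readings.items() if is_wet]
    if wet:
        close_valve("main-water-line")                     # stop the flooding at the source
        send_alert(f"Leak detected by: {', '.join(wet)}")  # notify facility staff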

With SmartEdge and connected technology, the real estate management company is seeing a higher ROI than a traditional building management system could offer — plus more applications, more data, and more flexibility.

Now with an optimized HVAC workload, fair tenant billing, less risk of flooding, and the potential for reduced insurance premiums, this connected building is set to efficiently support tenants well into the future.

IoT for Real-time Building Control

With IoT devices installed, you have a smart and connected building with real-time control of thermostats, lights, and discretionary loads to optimize your building’s performance and maximize your investment. Having sophisticated sensors, alarms, and thresholds for out-of-bounds conditions means you can get ahead of problems before they become critical.

Learn more about these scenarios and about connecting small and medium-sized buildings in our on-demand webinar.

The post Real-World Scenarios: Democratizing Smart Buildings with IoT appeared first on The Altair Blog.

Until recently, smart buildings have been limited to large commercial sites — but with the advent of powerful and cost-effective Internet of Things (IoT) solutions, owners and managers of small and mid-sized buildings can take advantage of connected technology.

Challenges for Small and Medium-Sized Organizations

Smaller buildings are frequently heterogeneous, with a myriad of disparate systems that don't interact with each other in an intelligent way. The equipment comes in different makes and models, and there's no common protocol between systems, which makes interoperability challenging.

Traditionally, rooftop HVAC units are controlled individually using a thermostat to manage zoned heating and cooling. Lighting is generally on a separate system and in zones controlled by individual switches or circuits. The same can be said for other electrical systems in the building, including the utility meter, submeters, and other operational equipment for each individual business. Small- and medium-sized business owners and operators have smaller budgets than big corporations, so many service providers focus on buildings with more than 50,000 to 100,000 square feet.

Without connected devices, a business has limited insight into building system performance, with typically only utility bills to help them understand energy consumption. This can mean assets end up working more than they need to, ultimately reducing asset life and increasing the cost of maintenance.

Most IoT-based smart building systems on today’s market use proprietary solutions or are siloed, meaning users are captive to a single technology stack. That can make it difficult to incorporate additional devices for control at the edge, including those used for energy monitoring and lighting control.

The cost to install a complex building automation system runs, on average, $2 to $3 per square foot. These systems are great for large commercial buildings, but at a smaller scale the ROI doesn’t work. $100,000 to $150,000 for a 50,000 square foot facility makes it a difficult expense to justify. In addition, these systems require specialized expertise to operate and can be complicated for building owners and operators. If the workflows are difficult to navigate, the system won’t be used to its highest potential, reducing efficiency and increasing long-term cost.

A new generation of IoT devices such as sensors, thermostats, and different types of lighting control solutions is driving the cost of data and device management down to only a few cents per square foot, allowing small- and medium-sized building owners and operators to participate in this space. This has enabled system integrators and service providers to adopt this type of technology as part of their solutions for a previously underserved segment.

Selecting a Smart Building Solution

When evaluating smart building solutions, key variables include:

  • Platforms that leverage open, multi-protocol solutions that don’t rely on a single technology to deliver value
  • Incorporation of logical workflows that are simple to use
  • The ability to scale across use cases as the industry begins to harness IoT technology more broadly

For energy managers, facility managers, and service providers who want to adopt IoT-based technology in their buildings, it's critical to have a flexible platform on which to scale and deliver value. This includes incorporating predictive analytics or enabling the choice of a third-party analytics platform.

One of the advantages of Altair’s Smart Building solution is that it’s built on Altair SmartWorks, an IoT platform for enabling applications across a wide variety of industries and verticals. SmartWorks has multiple components that help solve problems common to many IoT applications. It has an edge component called Altair SmartEdge, which helps connect to both wired and wireless networks on the ground, aggregate and normalize the data, and send it to the cloud. It has a cloud component and a drag-and-drop data visualization layer, which helps drive outcomes from the data that comes from the cloud. We’ve adopted this architecture for smart buildings.

SmartWorks is an open architecture, so each component in the structure is swappable with other components. While they work well together, they don't necessarily need each other. They're also easy to use: a core tenet of development for these products is that everything must be easy to implement and use. Because Altair's Smart Building solution is built on SmartWorks, as we add features they become part of the solution and part of all the smart buildings that use it. Having a solid backbone like this allows flexibility and long-term scalability.

Benefits of Enabling an IoT Solution

Not only is there a tremendous energy savings opportunity that comes with the ability to identify energy conservation measures through real-time data analysis, but users can set thresholds and alerts on the data from specific pieces of equipment. Doing that allows them to react quickly to faults and triage issues immediately. If you’re an energy service provider or mechanical contractor, the ability to do proactive monitoring means you can optimize your service visits. An IoT platform provides insight into the status and operation of equipment and devices, and it allows building managers to get ahead of faults and breaks — without waiting to be notified by tenants.

It’s an opportunity to increase customer satisfaction, be a trusted advisor, monitor asset performance over time, and identify issues early, potentially extending the life of your assets.

In an upcoming article we’ll share some specific use cases — setting up smart building solutions for commercial real estate and for a special-needs school.

Learn more about these scenarios and about connecting small and medium-sized buildings in our on-demand webinar available now.

The post Democratizing Smart Buildings with IoT appeared first on The Altair Blog.

Before a product ever goes to market, companies must understand how their products will behave during and beyond their expected use cycle. Building a brand perception of product quality relies on consistently reliable products. Additionally, high development costs and expensive warranty claims can be mitigated by integrating fatigue simulation into the product development process. There are many tools on the market to simulate the durability and fatigue life of products, but what sets some solutions apart?

For me, there are three key factors in establishing a successful fatigue simulation process.

  1. The software must be easy to use. Not every engineer needs to run fatigue analysis every day, so a steep learning curve is an impediment to productivity. Intuitive workflows are a must to produce trusted simulation results.
  2. Fatigue simulation must fit into your existing development process. Upending your development process to incorporate a new tool can have hugely negative effects on your productivity. The right solution should seamlessly fit into your simulation toolchain.
  3. The software you choose must be robust. It needs to offer a broad range of loading conditions in order to accurately predict how your product will perform under extreme real-world conditions.

Altair has made these factors the bedrock of the newly released fatigue analysis solution, Altair HyperLife™. Available in the recently released Altair HyperWorks™ 2019, HyperLife enables customers to quickly understand potential durability issues through an easy-to-learn solution for fatigue life under static and transient loading.

Since a wide variety of users need access to fatigue analysis, from test engineers to design engineers as well as traditional CAE users, ease of use is an absolute must. Guided, solver-neutral workflows walk users through the simulation process for a wide range of industrial applications. HyperLife also works with all major finite element analysis (FEA) tools, reading results files and recognizing geometry features including welds, which means it can be easily adopted no matter what other software tools you currently use in your product development.

Simulation is all about identifying and correcting potential issues early in the development process. Where physical testing can often take months, fatigue simulation can be done in a matter of hours, allowing you to evaluate damage and cycles to failure, faster and earlier, to ensure durability targets are hit without time-consuming and expensive redesigns.

The HyperLife workflow is familiar to anybody with durability testing or simulation experience. Choose a fatigue approach (Stress Life, Strain Life, Factor of Safety or Weld Fatigue), then assign materials to your parts from the comprehensive embedded database. Select cyclic load histories to create durability events using the mapping tool and then run the evaluation. HyperLife reports back damage, number of cycles to failure and other essential information.
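
To see why this runs in hours rather than months, consider the arithmetic at the core of a stress-life evaluation. The sketch below (an illustration of the general method, not HyperLife code; the Basquin constants and the load histogram are hypothetical) combines an S-N curve with Miner's rule:

# Stress-life sketch: Basquin S-N curve N_f = A * S**(-b) plus Miner's rule
A, b = 2.0e12, 3.5                       # hypothetical material constants

# Duty cycle: (stress amplitude [MPa], cycles per event)
histogram = [(180.0, 200), (120.0, 10_000), (80.0, 100_000)]

damage = sum(n / (A * S**-b) for S, n in histogram)   # Miner sum: D = sum(n_i / N_i)
print(f"damage per event = {damage:.3f}, events to failure ~ {1.0 / damage:.1f}")

Failure is predicted when the accumulated Miner sum reaches 1.0; the solver's job is to evaluate this at every element or node of the model for the selected fatigue approach.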

Fatigue simulation enables you to differentiate your products through quality and beat competitors to market with quicker engineering decisions. I recommend checking out HyperLife!

Join us for a free webinar to learn more about HyperLife.

The post Durability in Hours Instead of Months: Fatigue Simulation for All Engineers appeared first on The Altair Blog.

This guest contribution on the Altair Blog is written by Samir Khan, Maple Product Manager at Maplesoft, a member of the Altair Partner Alliance.

Digital signal processing techniques are vital to many technologies and are used widely by a broad swathe of engineers. A biomedical engineer may want to smooth pulse data, an automotive engineer may want to analyze an engine sound to identify characteristic frequencies, or an electrical engineer may want to model ultrasound transmission through a band gap medium.

While traditional signal processing tools help you do the number crunching, they don't maximize the overall value of your work. That value is the sum of several tangible and intangible factors, including:

  • calculations – the core of your analyses
  • documentation – the thought processes and assumptions behind the calculations
  • deployment – the methods with which your analyses can be shared with clients and colleagues
  • extensibility – the degree to which your analyses can be extended for different or more sophisticated problems
  • auditability – the ease with which your work can be verified by you or someone else, and
  • reliability – the degree to which your calculations can be trusted.

Most signal processing tools only address the first requirement – the calculations. Maple, however, satisfies all the requirements; this, in effect, helps you manage and structure your technical work.

Our users often report what they’re doing with Maple. Recently,

  • a mechanical engineer identified the location of a fault in a gearbox by applying cepstrum analysis
  • an audio engineer fine-tuned the cosine transform-based compression of a voice signal, dramatically reducing storage space while maintaining legibility
  • and, using Lomb-Scargle analysis, an astronomer inferred the existence of an exoplanet by analyzing the wobble of stars.

In all cases, users documented their work using rich text and in-line plots, and deployed their analyses using the run-time environment.
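
For readers curious what the gearbox example involves mathematically, the real cepstrum is just the inverse FFT of the log-magnitude spectrum; periodic fault impacts appear as a peak at the corresponding "quefrency" (one over the fault rate). The sketch below illustrates the idea on a synthetic signal, written in NumPy rather than Maple syntax:

import numpy as np

fs = 8192.0                                    # sample rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)
mesh = np.sin(2 * np.pi * 400 * t)             # gear-mesh tone at 400 Hz
impacts = (np.sin(2 * np.pi * 25 * t) > 0.99)  # fault impacts ~25 times per second
x = mesh * (1.0 + 2.0 * impacts) + 0.1 * np.random.randn(t.size)

# Real cepstrum: inverse FFT of the log magnitude spectrum
ceps = np.fft.irfft(np.log(np.abs(np.fft.rfft(x)) + 1e-12))
quefrency = np.arange(ceps.size) / fs

lo, hi = 50, 2000                              # skip the spike at zero quefrency
peak = quefrency[lo + np.argmax(ceps[lo:hi])]
print(f"dominant quefrency ~ {1000 * peak:.1f} ms -> fault rate ~ {1 / peak:.1f} Hz")

A peak near 40 ms points back to the 25 Hz impact rate, which is the essence of locating a gearbox fault by cepstrum analysis.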

Figure 1. Amplitude Spectrum of a Violin Note

Signal Preprocessing and Generation

Maple 2019 expands the range of signal processing tools, and includes new tools for cepstrum analysis and the spectral analysis of irregularly spaced data. This is in addition to Maple's extensive range of signal processing tools for analyzing and manipulating data in the frequency and time domains. The current package includes tools for:

  • Signal statistics and analysis
  • Filtering and windowing
  • FFTs and DCTs, wavelets, cepstrum, and Lomb-Scargle analysis
  • Periodograms, spectrograms, Bode plots, Nyquist plots and more
  • Data import and export
  • Audio and image processing

Figure 2. Audio Compressed with a Discrete Cosine Transform

In addition to its signal processing capabilities, Maple also offers over 5000 functions covering virtually every area of mathematics, including matrix computation, linear algebra, differential equations, data analysis, optimization and statistics. Maple provides the ability to perform symbolic, numeric and hybrid computations allowing for flexible problem solving and provides solutions beyond the reach of any other software system.

From autonomous vehicles to biometric recognition, the applications of signal processing in today’s technologies are endless. Maple is one of the best examples of an interactive math system and is specifically designed for describing, investigating, visualizing, and solving mathematical problems, including those that utilize signal processing.

More information on the Signal Processing Package can be found in the webinar, Discover the Signal Processing Package in Maple.

The post Maximizing the Value of Your Work with Digital Signal Processing appeared first on The Altair Blog.

Each year, through the generous support of its employees and vendors, Altair commits to sponsoring the National Multiple Sclerosis Society-Michigan Chapter in several deep and meaningful ways. We are "driven" by the importance of the organization's mission of "wanting to do something about MS now — to move toward a world free of multiple sclerosis," according to their website. Further, we are a passionate and motivated "Team" that has formed over the years to sustain that commitment.

Team Altair has the privilege of hosting fundraisers, receiving corporate-matching donations of up to $250 per year, using paid volunteer time (one 8-hour day), and seeking corporate sponsorship funds for the various annual events put on by the MS Society, as well as other Altair-qualified charities.

Earlier this year, on May 10, Altairian fellowship and collaboration raised $2,400 for the MS Society in a Baked Goods & Flower Sale Fundraiser. It’s one of our largest fundraisers for this campaign. Many of the flower baskets, herbs and plants come from Telly’s Greenhouse & Garden Center in Troy.

Team Altair then held its first two events in Holland, Michigan on June 1-2. The BIKE MS & WALK MS: Great Lakes West Michigan Breakaway started and finished at Hope College. We had the pleasure of promoting Altair's music app, WEYV, with stickers and sunglasses for everyone who participated in both events. And, for the first time in several years, our team was pleased to show off new jerseys and jackets!

BIKE MS was a two-day event. Three ambitious riders battled storms all day long on the first day and finished riding 75 miles. This particular group does not let obstacles stand in its way and can persevere through difficult times and difficult weather. Even though part of the route was shut down because of the rough Michigan weather, they completed the ride and crossed the finish line together, drenched and cold, and most importantly, safe.

WALK MS began shortly after the riders took off. Again because of the weather, the route was cut down and we were only able to walk one mile before it started storming on us. It was a fast walk around the Hope College campus! Afterward, we helped cheer riders in until our own bike group arrived.

On Day Two of BIKE MS, we were joined by three more Altairians and their spouses. It was a perfect, late-spring, sunny day for a bike ride along the beaches of Lake Michigan. Two riders rode 30 miles, one rode 50 miles, and the same threesome from the day before rode 100 miles. Everyone who has ever ridden in Holland for BIKE MS has said this is their favorite ride and route.

Watch here for more news on Team Altair/MS Society events in the near future!

The post BIKE MS and WALK MS: Team Altair Continues Support for the National MS Society appeared first on The Altair Blog.

Whether you work at a small company or a multinational corporation, Computer-Aided Engineering (CAE) technology can enable you to identify potential failures, improve performance, and find opportunities for lightweighting and cost savings. Too often, design and engineering teams rely on slow, iterative design cycles, with each cycle causing delays in the product development process, losing both money and competitive advantage.

Altair HyperWorks software is used at the world’s most innovative companies for the design and optimization of their products. It enables engineers to make better decisions, optimize designs and reduce the costs of physical testing.

Sounds great, but how do you go about training designers and design engineers to effectively use CAE software? And how do you perform the complex analysis and multiple physics simulations required to model product performance under real-world conditions?

Altair SimLab is an intuitive and easy to learn workflow platform that enables you to simulate structural and multiphysics problems without the complexity usually associated with finite element analysis (FEA) tools. This opens detailed structural analysis to all experience levels and brings Multiphysics to engineers already familiar with FEA.

SimLab is an automated and easy-to-use multiphysics simulation environment with bi-directional connections to parametric CAD systems for high-fidelity analysis of thick-walled parts, complex assemblies and fluid flow systems. Ideally suited for design engineers and simulation experts alike, SimLab automates every step of the simulation process from solid meshing and analysis to multiphysics co-simulation and interactive visualization.

Faster design exploration and validation is possible with syncing to popular CAD tools including CATIA, Pro/E, Siemens NX and SolidWorks. The workflows include the simulation of statics, dynamics, heat transfer, fluid flow, electromagnetics analysis, fluid-structure interaction, and electromagnetic-thermal coupling.

Want to see how SimLab can help your company on the path to simulation-driven design? Check out the upcoming webinar to see SimLab in action!

The post Better Workflows for All Engineers: From Structural Analysis to Multiphysics appeared first on The Altair Blog.
