Erwin provides the most comprehensive data management and governance solutions to automate and accelerate the transformation of data into accurate and actionable insights. The erwin EDGE platform combines data governance, enterprise architecture, business process, data modeling and data mapping.
Organizations cannot hope to make the most of a data-driven strategy without at least some degree of metadata-driven automation.
The volume and variety of data has snowballed, and so has its velocity. As such, traditional – and mostly manual – processes associated with data management and data governance have broken down. They are time-consuming and prone to human error, making compliance, innovation and transformation initiatives more complicated, which is less than ideal in the information age.
So it’s safe to say that organizations can’t reap the rewards of their data without automation.
Data scientists and other data professionals can spend up to 80 percent of their time bogged down trying to understand source data or addressing errors and inconsistencies.
That’s time better spent on data analysis.
By implementing metadata-driven automation, organizations across industries can unleash the talents of their highly skilled, well-paid data pros to focus on finding the goods: actionable insights that will fuel the business.
Metadata-Driven Automation in the BFSI Industry
The banking, financial services and insurance industry typically deals with higher data velocity and tighter regulations than most. The resulting bureaucracy is rife with data management bottlenecks.
These bottlenecks are only made worse when organizations attempt to get by with systems and tools that are not purpose-built.
For example, manually managing data mappings for the enterprise data warehouse via MS Excel spreadsheets had become cumbersome and unsustainable for one BFSI company.
Metadata-Driven Automation in the Pharmaceutical Industry
Despite its shortcomings, the Excel spreadsheet method for managing data mappings is common within many industries.
But with the amount of data organizations need to process in today’s business climate, this manual approach makes change management and determining end-to-end lineage a significant and time-consuming challenge.
One global pharmaceutical giant headquartered in the United States experienced such issues until it adopted metadata-driven automation. Then the pharma company was able to scan in all source and target system metadata and maintain it within a single repository. Users now view end-to-end data lineage from the source layer to the reporting layer within seconds.
Metadata-Driven Automation in the Insurance Industry
Insurance is another industry that has to cope with high data velocity and stringent data regulations. Plus many organizations in this sector find that they’ve outgrown their systems.
For example, an insurance company using a CDMA product to centralize data mappings is probably missing critical features such as versioning, impact analysis and lineage, which drives up costs, time to market and error rates.
By adopting metadata-driven automation, organizations can standardize the pre-ETL data mapping process and better manage data integration through the change and release process. As a result, both internal data mapping and cross-functional teams gain easy, fast web-based access to data mappings and valuable information like impact analysis and lineage.
Another common issue cited by organizations using manual data mapping is ballooning complexity and subsequent confusion.
Any organization expanding its data-driven focus without sufficiently maturing data management initiative(s) will experience this at some point.
One of the world’s largest humanitarian organizations, with millions of members and volunteers operating all over the world, was confronted with this exact issue.
It recognized the need for a solution to standardize the pre-ETL data mapping process to make data integration more efficient and cost-effective.
With metadata-driven automation, the organization would be able to scan and store metadata and data dictionaries in a central repository, as well as manage the business definitions and data dictionary for legacy systems contributing data to the enterprise data warehouse.
Experts predicted an uptick in GDPR enforcement in 2019, and Google’s recent record GDPR fine has brought that to fruition. France’s data privacy enforcement agency hit the tech giant with a $57 million penalty – more than 80 times the steepest ICO fine.
If it can happen to Google, no organization is safe. Many in fact still lag in the GDPR compliance department. Cisco’s 2019 Data Privacy Benchmark Study reveals that only 59 percent of organizations are meeting “all or most” of GDPR’s requirements.
So many more GDPR violations are likely to come to light. And even organizations that are currently compliant can’t afford to let their data governance standards slip.
Data Governance for GDPR
Google’s record GDPR fine makes the rationale for better data governance clear enough. However, the Cisco report offers even more insight into the value of achieving and maintaining compliance.
Organizations with GDPR-compliant security measures are not only less likely to suffer a breach (74 percent vs. 89 percent), but the breaches suffered are less costly too, with fewer records affected.
However, applying such GDPR-compliant provisions can’t be done on a whim; organizations must expand their data governance practices to include compliance.
A robust data governance initiative provides a comprehensive picture of an organization’s systems and the units of data contained or used within them. This understanding encompasses not only the original instance of a data unit but also its lineage and how it has been handled and processed across an organization’s ecosystem.
With this information, organizations can apply the relevant degrees of security where necessary, ensuring expansive and efficient protection from external (i.e., breaches) and internal (i.e., mismanaged permissions) data security threats.
Although data security cannot be wholly guaranteed, these measures can help identify and contain breaches to minimize the fallout.
Looking at Google’s Record GDPR Fine as an Opportunity
The tertiary benefits of GDPR compliance include greater agility and innovation and better data discovery and management. So arguably, the “tertiary” benefits of data governance should take center stage.
While once exploited by such innovators as Amazon and Netflix, data optimization and governance is now on everyone’s radar.
So organizations need another competitive differentiator.
An enterprise data governance experience (EDGE) provides just that.
This approach unifies data management and data governance, ensuring that the data landscape, policies, procedures and metrics stem from a central source of truth so data can be trusted at any point throughout its enterprise journey.
With an EDGE, the Any2 (any data from anywhere) data management philosophy applies – whether structured or unstructured, in the cloud or on premises. An organization’s data preparation (data mapping), enterprise modeling (business, enterprise and data) and data governance practices all draw from a single metadata repository.
In fact, metadata from a multitude of enterprise systems can be harvested and cataloged automatically. And with intelligent data discovery, sensitive data can be tagged and governed automatically as well – think GDPR as well as HIPAA, BCBS and CCPA.
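To make the idea of automated sensitive-data tagging concrete, here is a minimal Python sketch that flags columns whose sample values match simple patterns. The patterns, column names and function are all hypothetical illustrations; a real discovery engine would use far richer detection (dictionaries, checksums, machine learning) than two regular expressions.

```python
import re

# Hypothetical patterns for common sensitive-data types; illustrative only.
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def tag_sensitive_columns(columns):
    """Tag each column whose sample values all match a sensitive pattern."""
    tags = {}
    for name, samples in columns.items():
        for tag, pattern in PATTERNS.items():
            if samples and all(pattern.match(v) for v in samples):
                tags[name] = tag
                break
    return tags

columns = {
    "contact": ["alice@example.com", "bob@example.org"],
    "ssn": ["123-45-6789"],
    "city": ["Boston", "Austin"],
}
print(tag_sensitive_columns(columns))  # {'contact': 'email', 'ssn': 'us_ssn'}
```

Once a column is tagged, governance policies (masking, access controls, retention) can be attached to the tag rather than to each individual column.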
Organizations without an EDGE can still achieve regulatory compliance, but data silos and the associated bottlenecks are unavoidable without integration and automation – not to mention longer timeframes and higher costs.
To get an “edge” on your competition, consider the erwin EDGE platform for greater control over and value from your data assets.
Organizations are responsible for governing more data than ever before, making a strong automation framework a necessity. But what exactly is an automation framework and why does it matter?
In most companies, an incredible amount of data flows from multiple sources in a variety of formats and is constantly being moved and federated across a changing system landscape.
Often these enterprises are heavily regulated, so they need a well-defined data integration model that helps avoid data discrepancies and removes barriers to enterprise business intelligence and other meaningful use.
IT teams need the ability to smoothly generate hundreds of mappings and ETL jobs. They need their data mappings to fall under governance and audit controls, with instant access to dynamic impact analysis and lineage.
With an automation framework, data professionals can meet these needs at a fraction of the cost of the traditional manual way.
In data governance terms, an automation framework refers to a metadata-driven universal code generator that works hand in hand with enterprise data mapping for:
Pre-ETL enterprise data mapping
Governing and versioning source-to-target mappings throughout the lifecycle
Data lineage, impact analysis and business rules repositories
Automated code generation
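The first three items above can be sketched together. The following Python illustration shows, under stated assumptions, how a metadata-driven repository might version source-to-target mappings and generate code from the latest versions; the class and field names are illustrative, not erwin’s actual API.

```python
from dataclasses import dataclass

# Illustrative sketch of a metadata-driven mapping repository and code generator.
@dataclass
class Mapping:
    source: str      # "schema.table.column"
    target: str
    rule: str = ""   # optional transformation expression
    version: int = 1

class MappingRepository:
    def __init__(self):
        self.mappings = []

    def add(self, source, target, rule=""):
        # New mappings for the same target column get a higher version number.
        prior = [m.version for m in self.mappings if m.target == target]
        self.mappings.append(Mapping(source, target, rule, max(prior, default=0) + 1))

    def generate_sql(self, target_table):
        """Emit INSERT ... SELECT from the latest mapping per target column."""
        latest = {}
        for m in self.mappings:  # later adds carry higher versions, so they win
            if m.target.startswith(target_table + "."):
                latest[m.target] = m
        cols = [t.rsplit(".", 1)[1] for t in latest]
        exprs = [m.rule or m.source.rsplit(".", 1)[1] for m in latest.values()]
        src = next(iter(latest.values())).source.rsplit(".", 1)[0]
        return (f"INSERT INTO {target_table} ({', '.join(cols)}) "
                f"SELECT {', '.join(exprs)} FROM {src}")

repo = MappingRepository()
repo.add("stg.customer.cust_id", "dw.dim_customer.customer_id")
repo.add("stg.customer.cust_nm", "dw.dim_customer.name", "UPPER(cust_nm)")
print(repo.generate_sql("dw.dim_customer"))
```

Because every generated statement is derived from versioned mapping metadata, the same repository doubles as the audit trail and lineage record for the code it produced.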
Such automation enables organizations to bypass bottlenecks, including human error and the time required to complete these tasks manually.
In fact, being able to rely on automated and repeatable processes can result in up to 50 percent in design savings, up to 70 percent conversion savings and up to 70 percent acceleration in total project delivery.
So without further ado, here are the five key benefits of an automation framework for data governance.
Benefits of an Automation Framework for Data Governance
Creates simplicity, reliability, consistency and customization for the integrated development environment.
Code automation templates (CATs) can be created – for virtually any process and any tech platform – using the SDK scripting language or the solution’s published libraries to completely automate common, manual data integration tasks.
CATs are designed and developed by senior automation experts to ensure they are compliant with industry or corporate standards as well as with an organization’s best practice and design standards.
The 100-percent metadata-driven approach is critical to creating reliable and consistent CATs.
It is possible to scan, pull in and configure metadata sources and targets using standard or custom adapters and connectors for databases, ERP, cloud environments, files, data modeling, BI reports and Big Data to document data catalogs, data mappings, ETL (XML code) and even SQL procedures of any type.
Provides blueprints anyone in the organization can use.
Stage DDL from source metadata for the target DBMS; profile and test SQL for test automation of data integration projects; generate source-to-target mappings and ETL jobs for leading ETL tools, among other capabilities.
It can also populate and maintain Big Data sets by generating Pig, Sqoop, MapReduce, Spark and Python scripts, among others.
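As a rough illustration of staging DDL from source metadata, here is a hedged Python sketch that derives a staging CREATE TABLE statement from harvested column metadata. The type mappings, naming convention and function name are assumptions for a generic target DBMS, not a description of any specific product.

```python
# Illustrative type mapping from harvested source types to a generic target DBMS.
TYPE_MAP = {"string": "VARCHAR(255)", "int": "INTEGER", "date": "DATE"}

def stage_ddl(table, columns):
    """Generate a staging-table DDL statement from (name, source_type) metadata."""
    cols = ",\n  ".join(f"{name} {TYPE_MAP[src_type]}" for name, src_type in columns)
    return f"CREATE TABLE stg_{table} (\n  {cols}\n);"

ddl = stage_ddl("customer", [("cust_id", "int"), ("cust_nm", "string"), ("dob", "date")])
print(ddl)
```

Swapping `TYPE_MAP` per target platform is what makes the generator "universal": the mapping metadata stays the same while the emitted dialect changes.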
Incorporates data governance into the system development process.
An organization can achieve a more comprehensive and sustainable data governance initiative than it ever could with a homegrown solution.
An automation framework’s ability to automatically create, version, manage and document source-to-target mappings greatly matters to both data governance maturity and shorter time to value.
This eliminates duplication that occurs when project teams are siloed, as well as prevents the loss of knowledge capital due to employee attrition.
Another valuable capability is coordination between data governance and the SDLC, including automated metadata harvesting and cataloging from a wide array of sources for real-time metadata synchronization with core data governance capabilities and artifacts.
Proves the value of data lineage and impact analysis for governance and risk assessment.
Automated reverse-engineering of ETL code into natural language enables a more intuitive lineage view for data governance.
With end-to-end lineage, it is possible to view data movement from source to stage, stage to EDW, and on to a federation of marts and reporting structures, providing a comprehensive and detailed view of data in motion.
The process includes leveraging existing mapping documentation and auto-documented mappings to quickly render graphical source-to-target lineage views including transformation logic that can be shared across the enterprise.
Similarly, impact analysis – which involves data mapping and lineage across tables, columns, systems, business rules, projects, mappings and ETL processes – provides insight into potential data risks and enables fast and thorough remediation when needed.
Conducting impact analysis across the organization while meeting regulatory compliance requirements demands detailed data mapping and lineage.
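Conceptually, both end-to-end lineage and impact analysis amount to traversing a directed graph of column-level mappings: lineage walks upstream toward sources, impact analysis walks downstream toward consumers. A minimal Python sketch with hypothetical column names:

```python
from collections import defaultdict, deque

# Each edge is one source-to-target mapping, here at column granularity.
edges = [
    ("src.orders.amount", "stage.orders.amount"),
    ("stage.orders.amount", "edw.fact_sales.amount"),
    ("edw.fact_sales.amount", "mart.report.total_revenue"),
]

graph = defaultdict(list)
for src, tgt in edges:
    graph[src].append(tgt)

def impacted(node):
    """Return every downstream column affected by a change to `node` (BFS)."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impacted("src.orders.amount")))
# ['edw.fact_sales.amount', 'mart.report.total_revenue', 'stage.orders.amount']
```

Reversing the edge direction and running the same traversal yields upstream lineage, which is why a single mapping repository can serve both views.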
Intelligent automation delivers enhanced capability, increased efficiency and effective collaboration to every stakeholder in the data value chain: data stewards, architects, scientists and analysts; business intelligence developers; IT professionals; and business consumers.
It makes it easier for them to handle jobs such as data warehousing by leveraging source-to-target mapping and ETL code generation and job standardization.
It’s easier to map, move and test data for regular maintenance of existing structures, movement from legacy systems to new systems during a merger or acquisition, or a modernization effort.
erwin’s Approach to Automation for Data Governance: The erwin Automation Framework
Mature and sustainable data governance requires collaboration from both IT and the business, backed by a technology platform that accelerates the time to data intelligence.
Part of the erwin EDGE portfolio for an “enterprise data governance experience,” the erwin Automation Framework transforms enterprise data into accurate and actionable insights by connecting all the pieces of the data management and data governance lifecycle.
As with all erwin solutions, it embraces any data from anywhere (Any2) with automation for relational, unstructured, on-premises and cloud-based data assets and data movement specifications harvested and coupled with CATs.
If your organization would like to realize all the benefits explained above – and gain an “edge” in how it approaches data governance – you can start by joining one of our weekly demos for erwin Mapping Manager.
A mature and sustainable data governance initiative must include data integration.
This often requires reconciling two groups of individuals within the organization: 1) those who care about governance and the meaningful use of data and 2) those who care about getting and transforming the data from source to target for actionable insights.
Both ends of the data value chain are covered when governance is coupled programmatically with IT’s integration practices.
The tools and processes for this should automatically generate “pre-ETL” source-to-target mapping to minimize human errors that can occur while manually compiling and interpreting a multitude of Excel-based data mappings that exist across the organization.
In addition to reducing errors and improving data quality, the efficiencies gained through automation, including minimizing rework, can help cut system development lifecycle costs in half.
In fact, being able to rely on automated and repeatable processes can result in up to 50 percent in design savings, up to 70 percent conversion savings, and up to 70 percent acceleration in total project delivery.
Data Governance and the System Development Lifecycle
Boosting data governance maturity starts with a central metadata repository (data dictionary) for version-controlling metadata imported from a broad array of file and database types to inform data mappings. It can be used to automatically generate governed design mappings and code in the design phase of the system development lifecycle.
The right toolset – one that supports a unifying and underlying metadata model – will be a design and code-generation platform that introduces efficiency, visibility and governance principles while reducing the opportunity for human error.
Automatically generating ETL/ELT jobs for leading ETL tools based on best design practices accommodates those principles; it functions according to approved corporate and industry standards.
Automatically importing mappings from developers’ Excel sheets, flat files, Access databases and ETL tools into a comprehensive mappings inventory, complete with automatically generated and meaningful documentation of the mappings, is a powerful way to support governance. It also provides real insight into data movement – for lineage and impact analysis – without interrupting system developers’ normal work methods.
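The import step can be pictured as folding spreadsheet exports into one inventory. A simplified Python sketch, using a CSV export and illustrative column names (real spreadsheets would need per-team parsing rules and validation):

```python
import csv
import io

# A stand-in for one developer's spreadsheet export; headers are illustrative.
sheet = io.StringIO(
    "source_table,source_column,target_table,target_column\n"
    "stg_customer,cust_nm,dim_customer,name\n"
    "stg_customer,cust_id,dim_customer,customer_id\n"
)

# Fold each row into a normalized mappings inventory.
inventory = []
for row in csv.DictReader(sheet):
    inventory.append({
        "source": f"{row['source_table']}.{row['source_column']}",
        "target": f"{row['target_table']}.{row['target_column']}",
    })

print(len(inventory), inventory[0]["source"])  # 2 stg_customer.cust_nm
```

Normalizing every mapping to the same `source`/`target` shape is what lets lineage and impact analysis run across mappings that originated in dozens of different spreadsheets.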
GDPR compliance, for example, requires a business to discover source-to-target mappings with all accompanying transformations, such as which business rules in the repository are applied, to comply with audits.
When data movement has been tracked and version-controlled, it’s possible to conduct data archeology – that is, reverse-engineering code from existing XML within the ETL layer – to uncover what has happened in the past and incorporate it into a mapping manager for fast and accurate recovery.
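Data archeology, at its simplest, means walking the ETL layer’s XML and pulling the mapping facts back out. A hedged Python sketch follows; the element and attribute names here are hypothetical, since every ETL tool has its own export schema.

```python
import xml.etree.ElementTree as ET

# A stand-in for an ETL tool's XML export; the schema is invented for illustration.
etl_xml = """
<job name="load_dim_customer">
  <map source="stg_customer.cust_nm" target="dim_customer.name" rule="UPPER"/>
  <map source="stg_customer.cust_id" target="dim_customer.customer_id"/>
</job>
"""

# Recover (source, target, rule) triples from the XML for the mapping manager.
root = ET.fromstring(etl_xml)
recovered = [
    (m.get("source"), m.get("target"), m.get("rule", ""))
    for m in root.iter("map")
]
print(recovered[0])  # ('stg_customer.cust_nm', 'dim_customer.name', 'UPPER')
```

Each recovered triple can then be loaded into the mappings inventory exactly as if it had been authored there, restoring lineage for jobs whose documentation was lost.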
This is one example of how to meet data governance demands with more agility and accuracy at high speed.
Faster Time-to-Value with the erwin Automation Framework
The erwin Automation Framework is a metadata-driven universal code generator that works hand in hand with erwin Mapping Manager (MM) for:
Pre-ETL enterprise data mapping
Governing and versioning source-to-target mappings throughout the lifecycle
Data lineage, impact analysis and business rules repositories
So what’s on the horizon for data governance in the year ahead? We’re making the following data governance predictions for 2019:
Top 10 Data Governance Predictions for 2019
1. GDPR-esque regulation for the United States:
GDPR has set the bar and will become the de facto standard across geographies. Look at California as an example, with the California Consumer Privacy Act (CCPA) going into effect in 2020. Even big technology companies like Apple, Google, Amazon and Twitter are encouraging more regulations in part because they realize that companies that don’t put data privacy at the forefront will feel the wrath from both the government and the consumer.
2. GDPR fines are coming and they will be massive:
Perhaps one of the safest data governance predictions for 2019 is the coming clampdown on GDPR enforcement. The regulations weren’t brought in for show, so it’s likely the fine-free streak for GDPR will be ending … and soon. The headlines will resemble data breaches or hospitals with Health Insurance Portability and Accountability Act (HIPAA) violations in the U.S. healthcare sector. Lots of companies will have an “oh crap” moment and realize they have a lot more to do to get their compliance house in order.
3. Data policies as a consumer buying criteria:
The threat of “data trauma” will continue to drive visibility for enterprise data in the C-suite. How they respond will be the key to their long-term success in transforming data into a true enterprise asset. We will start to see a clear delineation between organizations that maintain a reactive and defensive stance (pain avoidance) versus those that leverage this negative driver as an impetus to increase overall data visibility and fluency across the enterprise with a focus on opportunity enablement. The latter will drive the emergence of true data-driven entities versus those that continue to try to plug the holes in the boat.
4. CDOs will rise to a better-defined role within the organization:
We will see the chief data officer (CDO) role elevated from being a lieutenant of the CIO to taking a proper seat at the table beside the CIO, CMO and CFO. This will give them the juice needed to create a sustainable vision and roadmap for data. So far, there’s been a profound lack of consensus on the nature of the role and responsibilities, mandate and background that qualifies a CDO. As data becomes increasingly more vital to an organization’s success from a compliance and business perspective, the role of the CDO will become more defined.
5. Data operations (DataOps) gains traction/will be fully optimized:
Much like how DevOps has taken hold over the past decade, 2019 will see a similar push for DataOps. Data is no longer just an IT issue. As organizations become data-driven and awash in an overwhelming amount of data from multiple data sources (AI, IoT, ML, etc.), they will need to get a better handle on data quality and focus on data management processes and practices. DataOps will enable organizations to better democratize their data and ensure that all business stakeholders work together to deliver quality, data-driven insights.
6. Business process will move from back office to center stage:
Business process management will make its way out of the back office and emerge as a key component to digital transformation. The ability for an organization to model, build and test automated business processes is a gamechanger. Enterprises can clearly define, map and analyze workflows and build models to drive process improvement as well as identify business practices susceptible to the greatest security, compliance or other risks and where controls are most needed to mitigate exposures.
7. Turning bad AI/ML data good:
Artificial Intelligence (AI) and Machine Learning (ML) are consumers of data. The risk of training AI and ML applications with bad data will initially drive the need for data governance to properly govern the training data sets. Once trained, the data they produce should be well defined, consistent and of high quality. The data needs to be continuously governed for assurance purposes.
8. Managing data from going over the edge:
Edge computing will continue to take hold. And while speed of data is driving its adoption, organizations will also need to view, manage and secure this data and bring it into an automated pipeline. The internet of things (IoT) is all about new data sources (device data) that often have opaque data structures. This data is often integrated and aggregated with other enterprise data sources and needs to be governed like any other data. The challenge is documenting all the different device management information bases (MIBs) and mapping them into the data lake or integration hub.
9. Organizations that don’t have good data harvesting are doomed to fail:
Research shows that data scientists and analysts spend 80 percent of their time preparing data for use and only 20 percent of their time actually analyzing it for business value. Without automated data harvesting and ingesting data from all enterprise sources (not just those that are convenient to access), data moving through the pipeline won’t be the highest quality and the “freshest” it can be. The result will be faulty intelligence driving potentially disastrous decisions for the business.
10. Data governance evolves to data intelligence:
Regulations like GDPR are driving most large enterprises to address their data challenges. But data governance is more than compliance. “Best-in-breed” enterprises are looking at how their data can be used as a competitive advantage. These organizations are evolving their data governance practices to data intelligence – connecting all of the pieces of their data management and data governance lifecycles to create actionable insights. Data intelligence can help improve customer experiences and enable innovation of products and services.
The erwin Expert Blog will continue to follow data governance trends and provide best practice advice in the New Year so you can see how our data governance predictions pan out for yourself. To stay up to date, click here to subscribe.
Organizations have spent a lot of time and money trying to harmonize data across diverse platforms, including cleansing, uploading metadata, converting code, defining business glossaries, tracking data transformations and so on. But the attempts to standardize data across the entire enterprise haven’t produced the desired results.
A company can’t effectively implement data governance – documenting and applying business rules and processes, analyzing the impact of changes and conducting audits – if it fails at data management.
The problem usually starts by relying on manual integration methods for data preparation and mapping. It’s only when companies take their first stab at manually cataloging and documenting operational systems, processes and the associated data, both at rest and in motion, that they realize how time-consuming the entire data prepping and mapping effort is, and why that work is sure to be compounded by human error and data quality issues.
It’s obvious how challenging the manual road is: discovering and synthesizing data that resides in different formats in thousands of unharvested, undocumented databases, applications, ETL processes and procedural code.
Consider the problematic issue of manually mapping source system fields (typically source files or database tables) to target system fields (such as different tables in target data warehouses or data marts).
These source mappings generally are documented across a slew of unwieldy spreadsheets in their “pre-ETL” stage as the input for ETL development and testing. However, the ETL design process often suffers as it evolves because spreadsheet mapping data isn’t updated or may be incorrectly updated thanks to human error. So questions linger about whether transformed data can be trusted.
Data Quality Obstacles
The sad truth is that high-paid knowledge workers like data scientists spend up to 80 percent of their time finding and understanding source data and resolving errors or inconsistencies, rather than analyzing it for real value.
Statistics are similar when looking at major data integration projects, such as data warehousing and master data management with data stewards challenged to identify and document data lineage and sensitive data elements.
So how can businesses produce value from their data when errors are introduced through manual integration processes? How can enterprise stakeholders gain accurate and actionable insights when data can’t be easily and correctly translated into business-friendly terms?
How can organizations master seamless data discovery, movement, transformation and IT and business collaboration to reverse the ratio of preparation to value delivered?
What’s needed to overcome these obstacles is an automated, real-time, high-quality and metadata-driven pipeline useful for everyone, from data scientists to enterprise architects to business analysts to C-level execs.
Doing so will require a hearty data management strategy and technology for automating the timely delivery of quality data that measures up to business demands.
From there, they need a sturdy data governance strategy and technology to automatically link and sync well-managed data with core capabilities for auditing, statutory reporting and compliance requirements as well as to drive business insights.
Creating a High-Quality Data Pipeline
Working hand-in-hand, data management and data governance provide a real-time, accurate picture of the data landscape, including “data at rest” in databases, data lakes and data warehouses and “data in motion” as it is integrated with and used by key applications. And there’s control of that landscape to facilitate insight and collaboration and limit risk.
With a metadata-driven, automated, real-time, high-quality data pipeline, all stakeholders can access data that they now are able to understand and trust and which they are authorized to use. At last they can base strategic decisions on what is a full inventory of reliable information.
The integration of data management and governance also supports industry needs to fulfill regulatory and compliance mandates, ensuring that audits are not compromised by the inability to discover key data or by failing to tag sensitive data as part of integration processes.
erwin Mapping Manager (MM) combines data management and data governance processes in an automated flow through the integration lifecycle from data mapping for harmonization and aggregation to generating the physical embodiment of data lineage – that is the creation, movement and transformation of transactional and operational data.
Its hallmark is a consistent approach to data delivery (business glossaries connect physical metadata to specific business terms and definitions) and metadata management (via data mappings).
Experts are predicting a surge in GDPR enforcement in 2019 as regulators begin to crackdown on organizations still lagging behind compliance standards.
With this in mind, the erwin team has compiled a list of the most valuable data governance, GDPR and Big data blogs and news sources for data management and data governance best practice advice from around the web.
From regulatory compliance (GDPR, HIPAA, etc.) to driving revenue through proactive data governance initiatives and Big Data strategies, these accounts cover it all.
Database Trends and Applications is a publication that should be on every data professional’s radar. Alongside news and editorials covering big data, database management, data integration and more, DBTA is also a great source of advice for professionals looking to research buying options.
Dataversity is another excellent source for data management and data governance related best practices and think-pieces.
In addition to hosting and sponsoring a number of live events throughout the year, the platform is a regular provider of data leadership webinars and training with a library full of webinars available on-demand.
For those looking for something a little more focused, check out TDAN. A subsidiary of Dataversity, TDAN regularly publishes new editorial content covering data governance, data management, data modeling and Big Data.
Organizations have been served yet another reminder of the value of data governance for data security.
Hotel and hospitality powerhouse Marriott recently revealed a massive data breach that led to the theft of personal data for an astonishing 500 million customers of its Starwood hotels. This is the second largest data breach in recent history, surpassed only by Yahoo’s breach of 3 billion accounts in 2013 for which it has agreed to pay a $50 million settlement to more than 200 million customers.
Now that Marriott has taken a major hit to its corporate reputation, it has two moves:
Respond: Marriott’s response to its data breach so far has not received glowing reviews. But beyond how it communicates to affected customers, the company must examine how the breach occurred in the first place. This means understanding the context of its data – what assets exist and where, the relationship between them and enterprise systems and processes, and how and by what parties the data is used – to determine the specific vulnerability.
Fix it: Marriott must fix the problem, and quickly, to ensure it doesn’t happen again. This step involves a lot of analysis. A data governance solution would make it a lot less painful by providing visibility into the full data landscape – linkages, processes, people and so on. Then more context-sensitive data security architectures can be put in place for corporate and consumer data privacy.
The GDPR Factor
It’s been six months since the General Data Protection Regulation (GDPR) took effect. While fines for noncompliance have been minimal to date, we anticipate them to dramatically increase in the coming year. Marriott’s bad situation could potentially worsen in this regard, without holistic data governance in place to identify whose and what data was taken.
Data management and data governance, together, play a vital role in compliance, including GDPR. It’s easier to protect sensitive data when you know what it is, where it’s stored and how it needs to be governed.
Data Governance for Data Security: Lessons Learned
Other companies should learn (like pronto) that they need to be prepared. At this point it’s not if, but when, a data breach will rear its ugly head. Preparation is your best bet for avoiding the entire fiasco – from the painstaking process of identifying what happened and why to notifying customers their data and trust in your organization have been compromised.
A well-formed security architecture that is driven by and aligned with data intelligence is your best defense. However, if there is nefarious intent, a hacker will find a way. Being prepared means you can minimize your risk exposure and the damage to your reputation.
Creating policies for data handling and accountability and driving culture change so people understand how to properly work with data are two important components of a data governance initiative, as is the technology for proactively managing data assets.
What’s key to remember is that these components act as links in the data governance chain by making it possible to understand what data serves the organization, its connection to the enterprise architecture, and all the business processes it touches.
Without the ability to harvest metadata schemas and business terms; analyze data attributes and relationships; impose structure on definitions; and view all data in one place according to each user’s role within the enterprise, businesses will be hard pressed to stay in step with governance standards and best practices around security and privacy.
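To make “harvesting metadata schemas” concrete, here is a minimal sketch of automated metadata harvesting in Python. It uses an in-memory SQLite database with hypothetical `customers` and `bookings` tables as stand-ins for a real warehouse connection; the `harvest_metadata` function and table names are illustrative, not part of any particular product.

```python
import sqlite3

# Hypothetical source database; in practice this would be a warehouse connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, dob TEXT)")
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, customer_id INTEGER, hotel TEXT)")

def harvest_metadata(conn):
    """Collect table and column metadata into one reviewable structure."""
    catalog = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # Each PRAGMA row: (cid, name, type, notnull, default, pk)
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        catalog[table] = [{"name": c[1], "type": c[2]} for c in cols]
    return catalog

catalog = harvest_metadata(conn)
```

Once schema metadata lives in one structure like this, it can be enriched with business terms, classifications and ownership – the raw material for the role-based, single-pane view described above.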
As a consequence, the private information held within organizations will continue to be at risk. Organizations suffering data breaches will be deprived of the benefits they had hoped to realize from the money spent on security technologies and the time invested in developing data privacy classifications. They also may face heavy fines and other financial, not to mention PR, penalties.
Less Pain, More Gain
Most organizations don’t have the time or money to manage data using manual processes. Outsourcing is also expensive, with inevitable delays because those vendors depend on manual processes too. Furthermore, manual processes require manual analysis and auditing, which is always more expensive and time-consuming.
So the more processes an organization can automate, the less risk of human error, which is actually the primary cause of most data breaches. And automated processes are much easier to analyze and audit because everything is captured, versioned and available for review in a log somewhere. You can read more about automation in our 10 Reasons to Automate Data Mapping and Data Preparation.
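The audit benefit of automation can be shown with a small sketch: an automated mapping step that records every transformation it applies, with a version stamp, so the log can be reviewed later. The `apply_mapping` function, column names and in-memory `AUDIT_LOG` are hypothetical illustrations, assuming a persisted, versioned log in practice.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice this would be a persisted, versioned audit log

def apply_mapping(record, mapping, version="v1"):
    """Apply a source-to-target column mapping and record the action for audit."""
    target = {dst: record[src] for src, dst in mapping.items()}
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "mapping_version": version,
        "source_fields": sorted(mapping),
    })
    return target

row = {"cust_nm": "A. Jones", "cust_mail": "a@example.com"}
mapped = apply_mapping(row, {"cust_nm": "customer_name", "cust_mail": "email"})
```

Because every run is captured with its mapping version, an auditor can reconstruct exactly which rules transformed which fields – something a spreadsheet-driven manual process cannot offer.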
And to learn more about how data governance underpins data security and privacy, click here.
The driving factors behind data governance adoption vary.
Whether implemented as preventative measures (risk management and regulation) or proactive endeavors (value creation and ROI), the benefits of a data governance initiative are becoming more apparent.
Historically most organizations have approached data governance in isolation and from the former category. But as data’s value to the enterprise has grown, so has the need for a holistic, collaborative means of discovering, understanding and governing data.
So with the impetus of the General Data Protection Regulation (GDPR) and the opportunities presented by data-driven transformation, many organizations are re-evaluating their data management and data governance practices.
With that in mind, we’ve compiled a list of the very best, best-practice blog posts from the erwin Experts in 2018.
Data governance’s importance has become more widely understood. But for a long time, the discipline was marred by a poor reputation owing to consistent false starts, drawn-out implementations and underwhelming ROI.
The evolution from Data Governance 1.0 to Data Governance 2.0 has helped shake past perceptions, introducing a collaborative approach. But to ensure the collaborative take on data governance is implemented properly, an organization must settle on a common definition.
GDPR had organizations scrambling to implement data governance initiatives by the effective date, but many still lag behind.
Enforcement and fines will increase in 2019, so an understanding of the five pillars of data governance readiness is essential: initiative sponsorship, organizational support, allocation of team resources, enterprise data management methodology and delivery capability.
Speaking of GDPR enforcement, this post breaks down how the regulation affects business.
From rules regarding active consent, data processing and the tricky “right to be forgotten” to required procedures for notifying affected parties of a data breach and documenting compliance, GDPR introduces a lot of complexity.
Organizations operating within the financial services industry were arguably the most prepared for GDPR, given its history. However, the huge Equifax data breach was a stark reminder that organizations still have work to do.
In addition to analyzing data governance for regulatory compliance in financial services, this article examines the value data governance can bring to these organizations – up to $30 billion could be on the table.
For some organizations, the biggest hurdle in implementing a new data governance initiative or strengthening an existing one is support from business leaders. Its value can be hard to demonstrate to those who don’t work directly with data and metadata on a daily basis.
This article examines this data governance roadblock and others, and offers advice on how to overcome them.
Businesses stand to gain a lot from a unified data platform.
This decade has seen data-driven leaders dominate their respective markets and inspire other organizations across the board to use data to fuel their businesses, leveraging this strategic asset to create more value below the surface. It’s even been dubbed “the new oil,” but data is arguably far better than the analogy suggests.
Data governance (DG) is a key component of the data value chain because it connects people, processes and technology as they relate to the creation and use of data. It equips organizations to better deal with increasing data volumes, the variety of data sources, and the speed at which data is processed.
But for an organization to realize and maximize its true data-driven potential, a unified data platform is required. Only then can all data assets be discovered, understood, governed and socialized to produce the desired business outcomes while also reducing data-related risks.
Benefits of a Unified Data Platform
Data governance can’t succeed in a bubble; it has to be connected to the rest of the enterprise. Whether strategic, such as risk and compliance management, or operational, like a centralized help desk, your data governance framework should span and support the entire enterprise and its objectives, which it can’t do from a silo.
Let’s look at some of the benefits of a unified data platform with data governance as the key connection point.
Understand current and future state architecture with business-focused outcomes:
A unified data platform with a single metadata repository connects data governance to the roles, goals, strategies and KPIs of the enterprise. Through integrated enterprise architecture modeling, organizations can capture, analyze and incorporate the structure and priorities of the enterprise and related initiatives.
This capability allows you to plan, align, deploy and communicate a high-impact data governance framework and roadmap that sets manageable expectations and measures success with metrics important to the business.
Document capabilities and processes and understand critical paths:
A unified data platform connects data governance to what you do as a business and the details of how you do it. It enables organizations to document and integrate their business capabilities and operational processes with the critical data that serves them.
It also provides visibility and control by identifying the critical paths that will have the greatest impacts on the business.
Realize the value of your organization’s data:
A unified data platform connects data governance to specific business use cases. The value of data is realized by combining different elements to answer a business question or meet a specific requirement. Conceptual and logical schemas and models provide a much richer understanding of how data is related and combined to drive business value.
Harmonize data governance and data management to drive high-quality deliverables:
Promote a business glossary for a shared understanding of data terminology:
A unified data platform connects data governance to the language of the business when discussing and describing data. Understanding the terminology and semantic meaning of data from a business perspective is imperative, but most business consumers of data don’t have technical backgrounds.
Instill a culture of personal responsibility for data governance:
A unified data platform is inherently connected to the policies, procedures and business rules that inform and govern the data lifecycle. The centralized management and visibility afforded by linking policies and business rules at every level of the data lifecycle will improve data quality, reduce expensive re-work, and improve the ideation and consumption of data by the business.
Business users will know how to use (and how not to use) data, while technical practitioners will have a clear view of the controls and mechanisms required when building the infrastructure that serves up that data.
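Those “controls and mechanisms” can be made tangible with a small sketch: policy rules keyed by data classification, checked against an asset’s actual configuration. The `POLICIES` table, rule names and thresholds here are hypothetical examples, not a real regulation’s requirements.

```python
# Hypothetical policy rules keyed by data classification.
POLICIES = {
    "PII": {"encrypted_at_rest": True, "max_retention_days": 365},
    "public": {"encrypted_at_rest": False, "max_retention_days": 3650},
}

def violations(asset):
    """Return the names of the policy rules an asset's configuration breaks."""
    rules = POLICIES.get(asset["classification"], {})
    broken = []
    if rules.get("encrypted_at_rest") and not asset.get("encrypted_at_rest"):
        broken.append("encrypted_at_rest")
    if asset.get("retention_days", 0) > rules.get("max_retention_days", float("inf")):
        broken.append("max_retention_days")
    return broken

asset = {"classification": "PII", "encrypted_at_rest": False, "retention_days": 400}
```

Running such checks continuously, rather than during a periodic manual audit, is what “centralized management and visibility” looks like in practice.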
Better understand the impact of change:
Data governance should be connected to the use of data across roles, organizations, processes, capabilities, dashboards and applications. Proactive impact analysis is key to efficient and effective data strategy. However, most solutions don’t tell the whole story when it comes to data’s business impact.
By adopting a unified data platform, organizations can extend impact analysis well beyond data stores and data lineage for true visibility into who, what, where and how the impact will be felt, breaking down organizational silos.
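Impact analysis across a unified platform is essentially a graph traversal: follow dependency edges from a changed asset through every process, dashboard and role that consumes it. The edges below are a hypothetical example graph; the asset names are invented for illustration.

```python
from collections import deque

# Hypothetical dependency edges: each asset maps to the things that consume it.
# Note the graph mixes data stores with processes, dashboards and roles.
DEPENDENTS = {
    "dw.dim_customer.email": ["etl.load_marketing", "report.churn_dashboard"],
    "etl.load_marketing": ["team.marketing_ops"],
    "report.churn_dashboard": ["role.cfo"],
}

def impact_of(asset):
    """Breadth-first walk of everything downstream of a changed asset."""
    seen, queue = set(), deque([asset])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)
```

Because the graph includes people and processes, not just tables, the result answers the “who, what, where and how” question rather than stopping at data lineage.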
Getting the Competitive “EDGE”
The erwin EDGE delivers an “enterprise data governance experience” in which every component of the data value chain is connected.
Now with data mapping, it unifies data preparation, enterprise modeling and data governance to simplify the entire data management and governance lifecycle.
Both IT and the business have access to an accurate, high-quality and real-time data pipeline that fuels regulatory compliance, innovation and transformation initiatives with accurate and actionable insights.