Unbxd | E-commerce Site Search & Product Discovery Solution
The Unbxd SmartEngage Platform is an ecommerce product discovery and recommendation platform. It helps ecommerce companies increase conversions and improve the online shopping experience with products like Personalised Search & Navigation, Dynamic Landing Pages, and Intelligent Product Recommendations.
When brands have huge product catalogs, product teams have to deal with enormous volumes of product data. When product data is maintained in spreadsheets or documents, manually correcting and/or updating huge volumes of data requires a lot of work.
Several aspects of editing and updating product data tend to be repetitive, especially because several products in a catalog share quite a few common attributes. But when brands and ecommerce companies rely on legacy systems like spreadsheets, documents, and other content management tools, product teams are forced to make edits and updates manually, in a sequential fashion. This is highly inefficient and can be a huge burden on the resources of a product team.
Unbxd PIM addresses these challenges with features that automate data correction and data editing:
Unbxd PIM allows product teams to create ‘product groups’: logical groupings of products that share certain properties. This enables product teams to perform bulk edits. Tasks to edit product data can also be assigned to multiple users, so repetitive edits on multiple products can be performed simultaneously.
When a brand refreshes their product portfolio and adds new variants of existing products, Unbxd PIM allows the new variants to simply ‘inherit’ all the common properties. This simplifies the process of adding new products.
Unbxd PIM lets product teams define product ‘categories’, which establish templates of product properties and datatypes. When suppliers are tasked with uploading product data for a product that is under a category, they are required to follow the template. This ensures that the data entering Unbxd PIM is accurate, so the product team need not check for and correct errors manually.
Teams can also define ‘dynamic groups’ by specifying certain criteria. Any subsequent products added to Unbxd PIM that meet the criteria, automatically become part of the corresponding dynamic group. For example, a dynamic group ‘Christmas Sale’ can be created with the condition that ‘color’ should be ‘red’. Any new product variants with the color ‘red’ will be automatically added to this group. This is especially helpful during the peak shopping seasons, as product teams can prioritize the product catalog for the sale.
When preparing product labels for an important sale, product teams can certify multiple products or product groups as ‘network-ready,’ i.e. the product content satisfies the property mapping requirements of a given sales channel. This allows such products/product groups to be exported to sales channels, so that the product information gets in front of shoppers in time for the sale.
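The ‘dynamic groups’ feature described above can be sketched as a simple criteria filter. This is an illustrative Python sketch only; the product fields, rule format, and function names are assumptions, not Unbxd PIM’s actual API:

```python
# Hypothetical sketch of criteria-based dynamic grouping.

def matches(product, criteria):
    """Return True if the product satisfies every criterion."""
    return all(product.get(prop) == value for prop, value in criteria.items())

def dynamic_group(products, criteria):
    """Collect all products that currently satisfy the criteria."""
    return [p for p in products if matches(p, criteria)]

catalog = [
    {"sku": "TSHIRT-01", "color": "red", "category": "apparel"},
    {"sku": "MUG-07", "color": "blue", "category": "home"},
    {"sku": "SCARF-03", "color": "red", "category": "apparel"},
]

# 'Christmas Sale' group: every product whose color is red
christmas_sale = dynamic_group(catalog, {"color": "red"})
```

Any product added later that satisfies the criteria would be picked up by the same filter, which is the essence of a dynamic group.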
Unbxd PIM’s automation and bulk edit features enable product teams to handle large product portfolios and high volumes of product content with ease.
Up next, we’ll be looking at product data transformation.
Each task involved in managing product data is handled by a product team member with a particular skill set, and roles are defined accordingly. For example, a content writer drafts the product description, while a designer updates product images. If the product team works with a freelance photographer, they provide the product images.
Managing roles for team members and creating and assigning tasks are essential functions of product data management. Also, as we saw with imports, collaborating with external stakeholders creates additional challenges for role and task management, such as:
Suppliers/vendors often send product files riddled with erroneous data. Product teams have to correspond with suppliers via long email threads to get product data updated. Managing tasks this way is cumbersome and inefficient.
Using email to share product information with external stakeholders can lead to multiple, redundant copies of product data which are prone to errors, as product teams and suppliers are forced to repeatedly upload/download files from emails.
To control access to sensitive product data such as pricing when sharing files with external stakeholders, product teams often maintain a separate set of files that exclude sensitive data, resulting in further redundancy.
Unbxd PIM addresses these challenges with convenient features that enable product teams to collaborate with all stakeholders involved in managing product data, in real-time:
Product teams can invite external stakeholders such as suppliers and freelancers to join their PIM organization by email. This allows them to set up their own accounts and logins and collaborate in real time within Unbxd PIM.
With Unbxd PIM, everyone has access to one central repository of data where they can edit or update existing data, or add new data. This negates the need for repeated file uploads/downloads and avoids redundant copies of product files.
Unbxd PIM lets product teams have granular control over access permissions to product data. Administrators can define read/write/manage permissions based on individual users, or based on ‘roles’. For example a content writer can be given ‘write’ permission for just the ‘product description’ property, but might have ‘read-only’ permission for all other properties. Similarly, a freelance photographer can be assigned ‘read/write’ permission for the ‘product images’ property alone, and can be restricted from accessing any other product data.
Product teams can create and assign tasks, based on roles or by members. Tasks such as adding/updating product data can be delegated to suppliers, since they know the product information in detail. This allows product teams to focus on more pressing matters, such as enriching product content with digital assets. Unbxd PIM sends email alerts when tasks are assigned, and also when tasks are completed. This makes delegation and management of work a lot more efficient.
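The granular, per-property permission model described above can be illustrated with a minimal sketch. The role names, property names, and lookup structure here are assumptions for illustration, not Unbxd PIM’s actual implementation:

```python
# Illustrative sketch of per-property access control by role.
# Properties not explicitly granted default to read-only.

PERMISSIONS = {
    "content_writer": {"product_description": "write"},
    "photographer":   {"product_images": "write"},
}

def can_write(role, prop):
    """A role may write a property only if it is explicitly granted."""
    return PERMISSIONS.get(role, {}).get(prop) == "write"
```

Under this scheme, a content writer can edit the product description but not pricing, and a photographer can edit only product images.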
By facilitating easier collaboration, Unbxd PIM makes product teams more efficient at managing product data which helps them get the right content in front of shoppers a lot faster. This helps brands and ecommerce companies realize higher sales revenues.
Up next, we’ll be taking a look at Product Management.
The product data management process often begins as a bunch of fields and columns in spreadsheets. This raw product data needs to be put through the wringer of a PIM tool before it can find its place on product listing pages on ecommerce sites.
Take for example a global sportswear brand that makes kits for international football competitions and gets its product inventory from suppliers in Tirupur, a small industrial town in the south of India. Or a computer brand that sources storage components from Thailand, display panels from South Korea, and plastic casings from China.
The detailed product information that shoppers see either on these brands’ websites or on those of channel partners like Amazon and Walmart needs to be sourced from suppliers operating out of India, China, Thailand, and South Korea. Complex sourcing ecosystems like these create challenges with product data imports. For example:
Suppliers can be distributed far and wide across the world. When sourcing files via outdated mechanisms such as emails and/or calls, delays occur as product teams have to deal with the differences in time zones and language.
Brands find it difficult to ensure the accuracy of product data. This means the product files that end up on the desk of the product data team need to be checked and validated by a cumbersome, manual process.
Brands may not be able to enforce consistency in the formatting of product files coming in from their suppliers. The files need to be reconciled to one standard format for internal use.
Maintaining product data in spreadsheets and documents often means members of a product team maintain multiple copies on each of their local machines. Distributed, redundant copies make it harder to keep track of the latest, most updated version. The wrong file could get sent out to sales channels, which means shoppers could see incomplete and/or inaccurate product information.
Collaborating with suppliers/vendors to update product data over email and/or calls can get tedious and time consuming. This becomes especially critical when product teams need to keep tight deadlines during a peak shopping season, such as Black Friday.
A cloud-based PIM tool like Unbxd PIM can make imports a lot easier to handle for a product data team:
Suppliers can be given their own logins, so they can directly upload product information into Unbxd PIM.
Unbxd PIM creates a central repository of product data that all members of the product team can easily access and update, ensuring that there’s just one up-to-date version of the product data.
Unbxd PIM can seamlessly extract product data from product files, no matter which format they are in. Product teams can simply upload the files as received from their suppliers, and the data is imported into Unbxd PIM.
With Unbxd PIM, product teams can automate data correction, saving time and manual effort.
Unbxd PIM can facilitate real-time collaboration between the product data team and external stakeholders like suppliers/vendors. This can speed things up substantially, compared to back and forth over long email threads.
Unbxd PIM can help brands and ecommerce companies manage product data imports with ease by using AI and ML-based automation features. By using Unbxd PIM to manage their product data, product teams can efficiently handle large volumes of product data, enrich it with digital assets, and publish it to multiple sales channels easily.
Up next, we’ll be taking a look at organization in Unbxd PIM.
Given today’s vast amounts of data and user queries, making e-commerce search engines more efficient is an extremely challenging problem. In part 1 of this blog, I wrote about why e-commerce businesses must explore entity extraction as a way to increase the relevance of their search results. This follow-up blog goes into how e-commerce companies can put entity extraction and machine learning (ML) into practice, and some advancements that Unbxd brought to this area in the last year.
What is entity extraction? Here’s a primer: consider the sentence, “Cindy bought two Levi’s jeans last week.” Using this input, we can highlight the names of entities: Cindy [person] bought [action] two [quantity] Levi’s [brand] jeans [category] last week [time].
Another example: take the query “Black leather jacket,” from which color_name, pattern, and category_type are the “named entities” recognized. We also know that black, leather, and jacket are the values of these entities.
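As a toy illustration of this idea, a minimal dictionary-based tagger can map known query tokens to entity categories. Real NER models generalize far beyond a fixed lexicon; the lexicon and category names here are made up for the example:

```python
# Toy dictionary-based entity tagger for a search query.

LEXICON = {
    "black": "color_name",
    "leather": "pattern",      # treated as the pattern/material entity here
    "jacket": "category_type",
}

def extract_entities(query):
    """Map each known token in the query to its entity category."""
    return {tok: LEXICON[tok] for tok in query.lower().split() if tok in LEXICON}

entities = extract_entities("Black leather jacket")
```

The output pairs each recognized token with its category, which is exactly the structure the “named entities” above describe.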
Today, training and building ML models entails massive challenges:
1. Challenges with getting data
2. Generating high quality labeled data
3. Optimizing algorithms and architecture to deploy ML models and deliver business results
Challenges with getting data
Data is the backbone of named-entity recognition (NER). Over several years, Unbxd has aggregated massive amounts of e-commerce clickstream and user behavior data. Our commitment to innovation starts with how we have built the Unbxd data layer: we crawl open-source content like Wikipedia, ConceptNet, and social media, and we combine it with a special data set built from scanning 100K sites. (This data set is so large that we call it the world catalog!)
Generating high quality labeled data
NER models require user behavior data and catalog data to generate the high-quality labeled data that trains NER-based machine learning models, which are capable of understanding entities in a search query. This understanding can be combined with existing implementations to provide a richer experience to shoppers.
Obviously, the more relevant data that we provide, the better the algorithms become. So, the first step was to generate high quality labeled data from historical user behavior data and product catalogs. But collecting, labeling, and maintaining huge amounts of historical data on which entity extraction algorithms can be trained is one of the biggest challenges for e-commerce enterprises.
In the past, while we used a number of models to run and optimize entity extraction modules, our labeled data set generation was largely manual. There were debates about quality, and about whether algorithms can produce a labeled data set as good as a human-labeled one. In 2018, we introduced intelligent tagging models to generate labeled data for different domains like fashion and accessories, home and living, electronics, and mass merchants. The results have suggested 94% accuracy for the algorithm-driven models, compared to 97% for human-labeled models. While we have seen a minor degradation in quality, these models have set us on the path to achieving truly scalable entity extraction models, from both a training and a testing perspective. Further, we see continuous improvement in the accuracy of algorithm-driven models and believe that they will soon surpass the accuracy of human-driven models.
Currently, to generate labeled data, we combine historical clickstream data, product catalog data at the category level, and a set of derived parameters, along with site-level configuration data, to build a statistical model that predicts labels for search queries with a certain confidence. We discard entities whose confidence is below a certain threshold and keep the high-confidence tokens and search queries.
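A sketch of the confidence-threshold step might look like the following; the threshold value and the prediction format are illustrative assumptions, not the production pipeline:

```python
# Hypothetical confidence-based filtering of predicted labels.

THRESHOLD = 0.85

predictions = [
    ("blue", "color_name", 0.97),
    ("jeans", "category_type", 0.93),
    ("xkz", "brand", 0.41),   # low-confidence guess, discarded
]

def keep_confident(preds, threshold=THRESHOLD):
    """Keep only (token, label) pairs predicted above the confidence threshold."""
    return [(tok, label) for tok, label, conf in preds if conf >= threshold]

labels = keep_confident(predictions)
```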
This labeled data will contain all “named entities” or concepts that you need to extract from the search query and the possible values of these entities, as per historical search queries and product descriptions.
Once we have labeled the data set, the NER based machine learning models are trained on this data set.
Optimizing algorithms and architecture to deploy ML models and deliver business results
Creating machine learning models that gain a smarter understanding of queries is one thing, but making those models available for delivering more relevant search results is another. I’m happy to write about Unbxd’s contributions to this in 2018.
We built our system to consist of two major modules: the Entity Extraction API serves the extracted “entities” from a search query to the front end, while the Model-Learning System evolves the learned model using clickstream data, derived parameters, and the product catalog.
The two components are:
1. Entity Extraction API: The Entity Extraction API takes the query and client key as input and returns the entities for that search query with a certain confidence. Each API host talks to the storage layer to fetch the latest machine learning model, since the models continuously evolve.
2. Model-Learning System: This module uses the pre-built labeled data sets to train a model for the general use case. The output model is made available via the API.
Recall and Precision
Recall is the ratio of “the number of relevant products retrieved” to “the number of relevant products in the catalog.” Precision is the ratio of “the number of relevant products retrieved” to “the total number of products retrieved.” Essentially, high precision implies that most of the results the algorithm returned are relevant, while high recall means that the model returned most of the relevant results.
For the search query “blue jeans,” if the catalog has 100 relevant products and the search retrieves 160 products out of which 80 are relevant, then recall is 80/100 = 0.8. Precision in this scenario is 80/160 = 0.5.
Rather than recall and precision alone, most models in the information retrieval domain are measured by the F1 score, the harmonic mean of recall and precision.
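The “blue jeans” numbers above can be checked with a few lines of Python:

```python
# Precision, recall, and F1 for the "blue jeans" example:
# 100 relevant products in the catalog, 160 retrieved, 80 of them relevant.

def precision_recall_f1(relevant_retrieved, total_retrieved, total_relevant):
    precision = relevant_retrieved / total_retrieved
    recall = relevant_retrieved / total_relevant
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = precision_recall_f1(80, 160, 100)
# p = 0.5, r = 0.8, f1 ≈ 0.615
```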
Our goal in 2018 was to use more data points from the product catalog, search queries, and clickstream data to generate NER tags for the historical queries of a client from a specific domain. We then wanted to generalize this understanding via a machine-learned model, so that given new search queries, the model can make accurate tag predictions for the various phrases in the query.
Simply put, NER models solve a sequence-to-sequence labeling problem: given training data in the form of input sequences [x1, x2, …, xn] and their corresponding labels [y1, y2, …, yn], we learn a model such that for a new input sequence, the model outputs the predicted sequence of labels.
A unique challenge in e-commerce is that the search queries are pretty short and less structured than in web documents. (This is a major reason why out-of-the-box NLP solutions are available for web and document search, but not e-commerce.)
As noted above and as a general case, we wanted to use clickstream data and/or the catalog to generate the training data. We tried multiple approaches to generate the model; here are some models that we found to perform well for NER:
1. spaCy: This NER model uses a transition-based model coupled with convolutional neural networks (CNNs). The input query passes through multiple states, and a decision is made at each state to generate the label for that state. As it is a neural-network-based solution, the model takes significant time to train, and its performance was lower than that of other approaches. We used spaCy, the free and open-source Python NLP library, for this model. For a sample query, “Lenovo mouse,” spaCy would predict “Lenovo” as the “brand” or “company” and “mouse” as the “product.”
2. Stanford CoreNLP NER tagger: This Java implementation of NER uses Conditional Random Fields (CRFs), and is thus also known as the CRFClassifier. The model maximizes the conditional probability of tagging the entire query as per the training data. Since CRF models try to maximize conditional tagging, recall is lower unless we have huge data sets.
3. Stanford MaxEnt POS tagger: This model uses a maximum-entropy-based tagger and is similar to the CRF. Though this model tags the query more liberally (hence the maximum entropy), causing some biased tagging, it has high recall.
Some other models we have tested are Hidden Markov Models and SVM-based models. Apart from experimenting with different models, we also assess the various implementations of each model and choose the one that gives the best result. For example, with the Stanford MaxEnt tagger, Unbxd chose the Apache OpenNLP library over the Stanford NLP library due to its more developer-friendly interface.
NER model A/B testing results for a few of our customers:
Initial results have shown a significant relative conversion uplift of around 4.83% (from 2.69% to 2.82%) over our current models.
An important point: as Unbxd continuously tests various NER models to find the best ones and optimize them for specific domains, we have found that a one-size-fits-all approach does not work. Domains are different, and what works in fashion may not work for electronics, so experimentation is key to finding out what works best for our customers.
Unbxd works relentlessly to help e-commerce websites optimize their search engines. If you have any questions or want to understand how you can leverage entity extraction to deliver more relevant search results, please reach out to me at email@example.com.
According to Statista, US ecommerce revenue of $447B was 36% of total retail revenue in 2017. That means Americans buy at least one in every three items online. And the most common way shoppers try to find what they’re looking for, not surprisingly, is search.
Are ecommerce search engines doing the best that they can?
For example, when you search for “Bluetooth headphones from $100 to $200” on Amazon, the query returns two products, and one of them costs $249.
Even the most popular ecommerce websites fall short in other ways: they fail to support product names (that are clearly listed on the product pages), don’t understand spelling mistakes, don’t process synonyms (nomenclature differences), can’t grasp themes or subjective qualifiers (keywords such as winter dresses, cheap, or in fashion), and can’t handle symbols and abbreviations (such as feet when the site uses ft).
Let’s look at some simple search query types at the largest ecommerce companies.
Product type searches — A query “30 in laptop” on Amazon shows “30 inch” laptop tables in the first two results. As another example, a search for a 16 inch laptop brings up laptop cases.
Feature Searches — A search query that involves a feature of the product.
While searching for a “cheap red evening gown” on eBay, you would find a black dress in the second position, though the website sorts the listing by best match and has many other red dresses to show.
Subjective searches — A query that involves a subjective preference of the customer. For example, when you search for a “high-quality sofa” on Best Buy, the top result is an unreviewed sofa, and the second result is a water cooler.
Spelling mistakes or nomenclature differences – When the query has a spelling mistake, most websites show zero results. For example, when you search for “handwash” on BathAndBodyWorks.com, the result page says that there is no such product — though the seller has many hand washes which it displays only when you search with the correct spelling “hand wash.”
Even when the shopper mentions a specific feature, product type, or preference, or enters a slightly incorrect spelling by mistake, she either gets unrelated results or the site comes back empty-handed with no products.
Why aren’t even the largest websites able to show relevant results?
Here’s one reason: as the number of searches has increased, so have the myriad ways in which users search. Shoppers from different geographies have their own requirements, nomenclature, and subjective descriptions that they put into their queries, resulting in a plethora of search queries for the same product.
The enormous size of today’s product catalogs also poses a challenge to websites to categorize the products and label them with all possible keywords. Amazon alone sells almost 480 million products in the US.
The combination of vast data sets, a wide variety of search queries, and a large number of products makes exactly matching a user’s requirement with a relevant set of products extremely complex.
How to improve query results:
To improve query results, search engines have to become more intelligent and adaptable to understand the intent behind a user’s query. Rather than simply reading the words literally, search engines must associate each query word with an intent, thus forming a meaningful phrase from the query to show appropriate results.
Many of today’s search engines leverage machine learning models and natural language processing (NLP) techniques to optimize search results. Among these NLP methods, entity extraction, a key technique that employs context sensitivity, can improve search results significantly.
What is Entity Extraction?
Entity extraction, or Named-Entity Recognition (NER), scans search queries to identify and classify words or phrases into predefined categories, such as names of people, brands, products, locations, styles, colors, quantities, monetary values, percentages, and many other features. These predefined categories (mostly) represent real-world objects and are described by proper nouns.
Let’s consider the search query “latest black plaid sweater dress.”
Here the product attributes or features are latest, black, plaid, and sweater dress. The output of NER may be “latest black_color plaid_pattern sweater_dress_category_type”; where color, pattern, and category_type are the predefined categories, and black, plaid, and sweater dress are their values, as shown in the diagram below.
In this process, the NER algorithm extracted the entities black, plaid, and sweater dress and put them into their respective categories.
For example, in the query “Calvin Klein shoes,” the NER model may identify “Calvin Klein” as a brand name and “shoes” as a product type.
Similarly, in the query “brown shoe polish,” shoe polish should be extracted as one compound entity which is the product type here. If the entity isn’t recognized as a compound token, then results may contain shoes, nail polish, or anything that matches with the individual keyword. Entity extraction plays a key role in identifying the phrases and avoiding possible irrelevant results for the end shopper.
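One common way to capture compound entities is greedy longest-match extraction over the query tokens. This sketch (with an illustrative, made-up lexicon) shows why “shoe polish” comes out as one entity rather than two unrelated keywords:

```python
# Greedy longest-match extraction of compound entities from a query.

LEXICON = {
    "shoe polish": "category_type",  # compound phrase, matched first
    "brown": "color_name",
    "shoes": "category_type",
}

def extract(query):
    tokens = query.lower().split()
    entities, i = [], 0
    while i < len(tokens):
        # Try the longest span starting at i first, shrinking one token at a time.
        for j in range(len(tokens), i, -1):
            phrase = " ".join(tokens[i:j])
            if phrase in LEXICON:
                entities.append((phrase, LEXICON[phrase]))
                i = j
                break
        else:
            i += 1  # no entity starts here; skip the token
    return entities

result = extract("brown shoe polish")
```

Because the longest span is tried first, “shoe polish” is tagged as a single category_type entity, avoiding spurious matches on “shoes” or “polish” alone.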
NER is the initial step in the search algorithm. The entity extraction model finds the significance of words in a search query to understand the user’s intent with respect to a specific product catalog, using historical data points. Further algorithms are then applied to the query.
Why should you consider NER?
At least 10-20 percent of all search queries return zero products. These lost-in-search queries imply a minimum 20 percent revenue loss. And as we have seen, apart from zero-result queries, many queries have low recall. These low recall rates push customers to leave the website without buying.
This blog provided an introduction to entity extraction for a business audience. In part two, I will cover how Unbxd brought multiple innovations to this area in 2018. Stay tuned!
We’re excited to announce Unbxd PIM, a shopper-focused Product Information Management (PIM) solution for ecommerce companies.
Great product content that is complete, accurate, and rich in product information can significantly increase sales. Consider these findings from a survey of online shoppers, which underline the importance of product content:
26% of online shoppers consider product information to be ‘the most important attribute’ when deciding which website to buy from. [1]
Over 87% believe clear product images to be essential for ‘a great shopping experience’. [2]
Over 46% of online shoppers say they research a product digitally before making a purchase. [3]
But delivering product content that catches the attention of online shoppers can get challenging: many brands and ecommerce companies manage catalogs of tens of thousands of products, each with as many as 200 attributes. Then consider the numerous SKUs and variants for each product. Dealing with high volumes of data is just the tip of the iceberg when it comes to product data management. This challenge only gets exacerbated during peak shopping seasons, when companies need to run aggressive promotional sales and get their updated and accurate product catalogs in front of customers at lightning speed.
Not only is the product data high in volume, there is also a complicated product data ‘supply chain’ that brands and retailers often need to navigate — collating data from various sources, checking the data for errors and validating it, supplementing product content with digital assets like images and videos, and finally distributing it to resellers.
That’s where Product Information Management, or PIM, comes into the picture. A PIM tool can streamline the entire product data management process and be the single source of truth for all of a brand’s or ecommerce company’s content needs: from sourcing product data from vendors and suppliers, to creating and maintaining product data, to enriching it, and then distributing it to partners and marketplaces.
You might already be using a PIM system, or a collection of spreadsheets and documents (the ‘chicken wire and duct tape’ approach), but chances are that system has issues keeping you from engaging with your customers in the best possible way. Your workflow might be based on legacy tools like spreadsheets, your product data could be in silos, and bottlenecks like email-based approvals could be slowing things down — all hurting your ability to get products out to your sales channels on time.
A cloud-based PIM tool can solve all of these problems for your business, and more. The right PIM tool can make your workforce a lot more productive, making them capable of managing larger product portfolios with a higher number of product attributes. More productive teams mean faster product launches, leading to higher sales and revenue.
A PIM solution addresses the key challenges of product data management:
Sourcing and consolidating product data from various suppliers can be challenging, as the data comes in different file formats. Product data may be stored internally across multiple tools, resulting in silos and data redundancy.
Raw product data needs to be enriched with digital assets such as images and videos, and legacy systems like spreadsheets do not handle digital assets well.
Brands and online retailers can find it challenging to manage their partners, to check their product information for export readiness, and to adhere to the unique content format requirements of multiple sales channels.
Why Unbxd PIM?
With more than a handful of PIM solutions already in the market, why did Unbxd invest in another one? The answer lies in Unbxd PIM’s AI and machine learning-driven automation engine that connects products with shoppers faster, while preventing errors in product information. Request a demo today with our product experts to learn how Unbxd PIM can help your business scale and grow.
Starting next week, we’ll take a deeper dive into how Unbxd PIM addresses each challenge of product data management.
Online merchandising in its most basic sense is quite simple – it’s about getting the right products in front of the eyeballs of the right people. In practice, however, it is quite the tall order.
Modern online merchandising tools try to make life easier for merchandisers and marketers. They simplify the creation of boost-and-bury campaigns and provide insights that can influence conversions. A fundamental feature of all modern merchandising tools is displaying product rankings for search queries and pages, which can be used for campaign strategy.
However, insights such as ranking criteria, relevance scores, and other finer metrics aren’t always easily available. Such granular insights can help drive merchandising decisions that make the difference between success and failure.
We’re excited to announce the availability of these metrics in Ranking Insights.
What is Ranking Insights?
Ranking Insights is an enhancement to the existing merchandising campaign workflows of Unbxd Search and Unbxd Browse.
Ranking Insights is a window into the advanced insights for search and category pages. It displays comparative performance of hits, clicks, add to carts, and orders for each search query and category page. Ranking Insights also breaks down the performance of related synonyms, phrases, and concepts (for Unbxd Search only), and it displays the various Filters, Sorts, and Boosts applied by the merchandiser to a specific query.
Merchandisers who wish for more granular data can also view the above data points for individual products. Product scores on query relevance (for Unbxd Search only), popularity (query level for Unbxd Search and site level for pages), and dynamic scores based on boosts and buries are visible for individual products, and the Clickstream View makes it easy to compare the performance of multiple products from a single window.
Why did we build Ranking Insights?
At Unbxd our belief is that data is central to the success of any product discovery activity. Ranking Insights makes performance data accessible and actionable. Better understanding of the logic and reasoning behind product rankings will help online merchandisers and marketers develop strategies and campaigns that contribute to conversions.
How does Ranking Insights help?
Ranking Insights can help merchandisers and product teams:
Decrypt product ranking: See why a specific product or a set of products are ranked in a particular order, or why a certain product does not rank higher despite being a bestseller for specific search queries
Interpret boosting impact: Understand how low, medium, and high levels of boosting affect product positions in pages and fine-tune boosting for desired results
Measure campaign results: Compare product sequences pre- and post-campaign or validate live rankings such as zero search results for search queries
Understand the effects of merchandising: Identify if page rules or site level rules are limiting certain products from appearing in search results
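To build intuition for the boosting point above, here is a toy model of how graded boost levels could scale a product's base relevance score. The multipliers and scores are invented for illustration; Unbxd's actual scoring is internal to the platform:

```python
# Invented multipliers -- NOT Unbxd's real scoring formula.
BOOST_MULTIPLIERS = {"none": 1.0, "low": 1.2, "medium": 1.5, "high": 2.0}

def boosted_score(base_score: float, boost_level: str) -> float:
    """Scale a base relevance score by a named boost level."""
    return base_score * BOOST_MULTIPLIERS[boost_level]

base = {"jacket": 0.70, "scarf": 0.85}
# Boosting the jacket "high" lifts it above the unboosted scarf:
scores = {"jacket": boosted_score(base["jacket"], "high"),
          "scarf": boosted_score(base["scarf"], "none")}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['jacket', 'scarf']
```

The takeaway is the workflow, not the numbers: Ranking Insights shows you the before-and-after positions so you can fine-tune the boost level instead of guessing.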
Ranking Insights for Search
Follow these steps to get to the Ranking Insights screen for Search.
1. Log in to your Unbxd Dashboard. On the main menu, under Merchandising, select Commerce Search.
2. Ensure the Query Rules tab is selected. Enter the search query for which you wish to see ranking insights, then select a campaign from the list that appears. The search query must be part of either an active or a paused campaign. If no search campaigns exist, Ranking Insights can still be accessed by creating a non-working Search campaign.
3. Click Edit Campaign.
4. Continue past the Campaign Details page and into the Merchandising tab. Select the Continue Merchandising option.
5. Select Ranking Insights on the top right of the screen to see the query performance metrics.
6. Toggle Clickstream Data on. Point to any individual product and select the Insights option to view performance data for individual products.
Ranking Insights for Browse
Follow these steps to get to the Ranking Insights screen for Browse.
1. Log in to your Unbxd Dashboard. Select the Products dropdown and choose Browse. On the main menu, under Merchandising, select Browse.
2. Ensure the Page Rules tab is selected. Enter the page campaign for which you wish to see ranking insights, then select a campaign from the list that appears. The page rule must be part of either an active or a paused campaign. If no page-level campaigns exist, Ranking Insights can still be accessed by creating a non-working Page Rule campaign.
3. Click Edit Campaign.
4. Continue past the Campaign Details page and into the Merchandising tab. Select the Continue Merchandising option.
5. Select Ranking Insights on the top right of the screen to see the page performance metrics.
6. Toggle Clickstream Data on. Point to any individual product and select the Insights option to view performance data for individual products.
For queries or to see a detailed walkthrough of Ranking Insights, contact your Unbxd Customer Success Manager.
By definition, Product Information Management (PIM) is a systematic approach to organizing all the data an organization has, or uses, about its products. Last week I discussed how organizations can make experiences better by adding discipline through enhanced use of their Product Lifecycle Management (PLM) system. In this article, we will discuss how the train depot of PIM can drive more effective use of product information when machine learning or artificial intelligence gets involved.
At its core, a PIM system takes the data an organization has about its products and organizes it for use on eCommerce platforms. On a broader level, a PIM can activate ancillary systems such as Site Search, Recommendations or Personalization engines, Outfitting, Fit algorithms, and so on. These systems are typically driven by complex algorithms to enhance the customer experience and ultimately drive conversion. The PIM is the restaurant that serves up the food for the AI-driven mechanisms to do their work.
Let’s dig a bit deeper. The data within a PIM typically ranges from the ordinary, such as product descriptions and summaries, to the more complex, such as category relations or metadata relations. With this arsenal of data, the PIM can interface directly with the eCommerce platform, and via API or data feed it can also populate the AI-based systems.
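That API-or-data-feed handoff can be pictured as flattening PIM records into a feed payload. The record fields and feed structure below are hypothetical, not a real Unbxd or PIM vendor schema:

```python
import json

# Hypothetical PIM records; field names are illustrative only.
pim_products = [
    {"sku": "JKT-01", "title": "Winter Jacket",
     "description": "Insulated parka",
     "category": "Outerwear > Jackets",
     "attributes": {"color": "navy", "fit": "regular"}},
    {"sku": "SCF-02", "title": "Wool Scarf",
     "description": "Merino wool scarf",
     "category": "Accessories > Scarves",
     "attributes": {"color": "grey"}},
]

def build_feed(products):
    """Wrap PIM records into a JSON payload that a search or
    recommendations engine could consume via file drop or API."""
    return json.dumps({"feed": {"catalog": {"items": products}}}, indent=2)

feed = build_feed(pim_products)
print(feed[:60])
```

The point is that the richer the attributes in the PIM, the more the downstream AI systems have to learn from.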
Let’s use Site Search and Outfitting as an example. The PIM system populates the eCommerce platform with the base-level product data it has stored, allowing a customer to purchase the product. We could stop here and a customer would still be able to buy, but to really enhance the experience, let’s go further. Sophisticated, AI-based site search tools will also interface with the PIM to get as much information about the product as they can, and begin the learning process as customers engage with site search to find what they are looking for. Meanwhile, the PIM also interfaces with Outfitting to pull together appropriate outfits based on the criteria and relationships established within the PIM. Pairing a hat, gloves, and scarf with a jacket and pants, for example, completes the outfit. The smart Site Search program then takes the customer’s input and not only returns results for “Winter Jacket” but also the completed outfit funneled from the product relationships established in the PIM. The more customers search and browse the site, the smarter the Site Search and Outfitting machines get, and conversion rises. All based on the fundamental data that is organized and maintained within the PIM.
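The outfit-completion idea above can be sketched as a lookup over PIM-style product relationships. The relation table, product names, and SKUs here are all invented for illustration:

```python
# Toy relation table: which companion products complete an outfit
# around an anchor product (the kind of relationship a PIM maintains).
OUTFIT_RELATIONS = {
    "Winter Jacket": ["Hat", "Gloves", "Scarf", "Pants"],
}

# Toy catalog mapping product names to SKUs.
catalog = {
    "Winter Jacket": "JKT-01",
    "Hat": "HAT-01",
    "Gloves": "GLV-01",
    "Scarf": "SCF-01",
    "Pants": "PNT-01",
}

def complete_outfit(anchor_product: str) -> list:
    """Return the SKUs that complete an outfit around the anchor
    product, driven by relationships defined upstream in the PIM."""
    companions = OUTFIT_RELATIONS.get(anchor_product, [])
    return [catalog[name] for name in companions if name in catalog]

print(complete_outfit("Winter Jacket"))
# ['HAT-01', 'GLV-01', 'SCF-01', 'PNT-01']
```

In a real deployment, a search result for “Winter Jacket” would surface these companion SKUs alongside the jacket itself.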
So ultimately, as companies build out their eCommerce ecosystems, they should think about how the hierarchy of needs progresses. They all sell products or services of some kind. To make the most of that, they want the information about those products to be as organized as possible; a PIM will do that. The product then goes to the website for customers to find and purchase. On top of that, the company will seek out and purchase the coolest, fanciest tools to make finding and purchasing even easier, faster, and smarter. It starts with the PIM and quickly makes its way to the cool stuff, i.e. AI and machine-learning systems. Bingo-bango, just like that you have a sophisticated eCommerce ecosystem that marries the diligence and efficiency of Product Information Management with the wonders of Artificial Intelligence.
I hope you found this insightful and see you again next week.
A long time ago, while sitting with then Lands’ End CEO Edgar Huber, I remember him saying something that has stuck with me to this day: “Joel, you can make the best website in the world, but if the product is no good it won’t matter.” It was a magical phrase that made me look at how I envisioned my websites, and especially apparel sites, completely differently. Undoubtedly, as an e-commerce and digital marketing leader with a heritage steeped in user experience and implementing Site Search systems, it was and is my job to create and optimize engaging web experiences. It is all about the customer and making the experience as efficient as possible. But as powerful as that sounds, customers come to the website for one thing: the product. It’s customer, product, experience. Do these right, and the world will beat a path to your door.
So, I am sure you are thinking that’s an “interesting” opening, but what does it have to do with Site Search and PLM? Let’s discuss. As products are designed and onboarded, they are typically first conceived by technical designers who input product design specifications and attributes into the PLM. The PLM is also touched by people in Sourcing to ensure that the goods or materials arrive on time. Ultimately, merchants touch the product in the PLM as well to ensure the attributes are correct: the descriptions, the pricing, the inventory, and so on. In some cases, companies may also introduce a Product Information Management (PIM) system as a transfer station between the PLM and the eCommerce systems. PLMs are complex beasts with multiple unique touchpoints from various groups. Let’s hold this thought for a minute.
In an ever-increasing mobile world, consumers use their handheld devices to shop on the internet. Take that a step further: thanks to its ease of use, those same people will use site search to find products anywhere between 10 and 30 percent of the time. Today’s site search mechanisms are extremely complex, machine-learning-based tools that take inputs and algorithmically derive outputs to give customers a result set. It’s quick, it’s efficient, everything you want in a web experience… if it works and gives the customer what they want. And this is where the marriage and mindset begins.
How many customers, real customers, use the term “wovens”? That would be zero. Yet within the apparel world there is seemingly no word more used to describe a category of clothing. That’s not a bad thing; it’s just not a customer thing. Customers will enter “polos” or “sweater” or the like. Those are human terms, and site search works and optimizes on human terms. Site search uses artificial intelligence to learn in human terms. That’s the thing: systematically, most if not all site search systems take a data input of some kind (feed or API) from a product catalog that is, or should be, based on data in the PLM, which may pass through a PIM as a transfer point before it hits eCommerce. The Site Search engine then consumes that information and learns how customers interact with that data. Pretty cool, right?
It’s thus incumbent on the retailer to make sure they are speaking in human terms, because for the sake of their customers the machine is trying to learn what it should render. Far too many retailers inefficiently burden the Site Search tool with overrides or rules inside the tool, when the information about the product should be introduced where the product is built and defined: the PLM. Starting early makes everything easier. I believe this is starting to take hold within the industry, but I can’t emphasize enough how updating and formatting things correctly at the beginning allows the interfaces to Site Search and eCommerce to run more effectively and learn more quickly, thus enhancing the customer experience and, ultimately, increasing revenue. Another example would be denim, or as most humans would say, “jeans.” The problem is you have boot cut, skinny, slim fit, loose fit, hip hugger, and on and on. Making sure your Site Search knows how to consume all these varieties is key to customer satisfaction and conversion. Ask yourself: is it easier to build a ton of logic in a Site Search system, or to let the PLM drive the attribution that is simply consumed by Site Search? I think you know my point of view.
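The “wovens” versus “jeans” gap above can be sketched as a simple mapping from internal merchandising terms to the human terms customers actually type. All of the terms and the matching logic are invented for illustration; in practice this attribution would live in the PLM/PIM, not in per-query search rules:

```python
# Illustrative mapping from internal catalog terms to customer language.
INTERNAL_TO_CUSTOMER = {
    "wovens": ["polos", "button-downs", "shirts"],
    "denim": ["jeans", "boot cut jeans", "skinny jeans", "slim fit jeans"],
}

def expand_query(query: str) -> list:
    """Map a customer query to the internal terms it should match.
    Returns the query unchanged when no mapping applies."""
    matches = [internal
               for internal, human_terms in INTERNAL_TO_CUSTOMER.items()
               if any(query in term or term in query for term in human_terms)]
    return matches or [query]

print(expand_query("jeans"))     # ['denim']
print(expand_query("polos"))     # ['wovens']
print(expand_query("sneakers"))  # ['sneakers']
```

Maintaining this mapping once, upstream, is the whole argument: the search engine consumes it as catalog data instead of the retailer hand-building overrides query by query.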
What I am simply trying to convey is that the relationship between great products and great technology will drive customer satisfaction. Make things easier on yourself early, get all teams thinking in customer and human frameworks, and you can see how a downstream system can help coordinate and enhance the front-end experience.
And hopefully I’ve helped you understand the correlation between Site Search and PLM in our crazy ever digitizing world.