Playback is where the MongoDB blog brings you selected talks from around the world and around the industry. Here, we continue showcasing the great talks from MongoDB World 2018 by people running MongoDB in production and at scale.

The Network Assurance Engine team at Cisco Systems have a lot of experience with time series data and MongoDB. Larger network fabrics generate 12 million time series data points every hour and all of those data points need to be analyzed to ensure that the network and its applications are working correctly. In Gabriel Ng and Tom Monk's talk, MongoDB for High Volume Time Series Data Streams, presented at MongoDB World 2018, they show how they took on that data challenge using MongoDB as their time series database.

MongoDB for High Volume Time Series Data Streams - YouTube

With a large amount of data collected - hundreds of millions of event documents - the analysis process needs access to at least several hours of contextual data. At that scale, indexes can exceed the available RAM. As an added constraint, the team needed to stay within a small resource footprint for their database, yet retain fast write throughput and the ability to view older data without sacrificing performance.

This led the team to focus on optimizing their data model for time series data. By making the timestamp part of their collection index, they could safely keep only the recently used parts of the indexes in memory. Further optimizations came from date-partitioning the time series data and working with MongoDB support to get the best performance from the underlying operating system.
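The date-partitioning idea can be sketched in a few lines. Below is a minimal, hypothetical illustration (the collection naming scheme is an assumption for this sketch, not Cisco's actual one) of routing each event to a per-day collection, so that only recently written partitions and their indexes stay in the working set:

```python
from datetime import datetime, timezone

def partition_collection(ts: datetime) -> str:
    # Route an event to a per-day collection; older partitions (and their
    # indexes) drop out of the hot working set once writes move on.
    return ts.strftime("events_%Y_%m_%d")

name = partition_collection(datetime(2018, 6, 26, 14, 30, tzinfo=timezone.utc))
print(name)  # events_2018_06_26
```

Combined with a timestamp-prefixed index inside each partition, this keeps the in-memory index footprint bounded by the recent time window rather than by the full data volume.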

The talk dives into these and other elements of enabling Cisco's Network Assurance Engine team to analyze terabytes of time series data by making use of MongoDB's flexibility and versatility when it comes to indexing data.


See more talks like this - in person - at MongoDB World 2019.
Register Now!

Being a Sales Development Rep, or SDR, is often a first step to jump-starting a career in sales. At MongoDB, we have a culture that celebrates differences, fosters growth and enablement, and ensures that we provide our SDRs with the tools and the confidence that they need to grow their careers.

MongoDB’s “SDR Series” explores the growth of our SDRs, who have created unique career paths based on their interests, skills, and passions. We love our SDRs and know that with our clear development plans for success and promotion, they will all likely be on to the next step in their careers in no time.

In this post, you will meet Ed Liao, who started a pilot project at MongoDB that was incredibly successful, and will now be moving from Austin, TX to Sydney, Australia to grow MongoDB’s business in ANZ.

Ed Liao, Senior Account Development Representative, Australia & New Zealand, Sydney, Australia

Since I first joined MongoDB, I have grown my sales career exponentially and am relocating to Sydney, Australia soon to further build our SDR program! Before joining MongoDB in October 2017, my previous role was a sales role, but in a completely different industry. Although it was a great stepping stone in my career, I was never passionate about what I was selling and wasn’t working in an environment of collaboration and creativity. I joined MongoDB because I saw that I could work with like-minded people with the same passion and enthusiasm. I could also finally sell a product that is fascinating with extremely strong market potential. I knew that MongoDB would give me the opportunity I was seeking in sales, to be able to drive business growth and develop the skills I needed to be successful in the long-term as a rep.

Launching sales development in Australia & New Zealand

My MongoDB career growth has been extraordinary. I started as an SDR working primarily on inbound leads and opportunities for the US and LATAM markets. Six months later, I was consistently exceeding my targets and was promoted to Senior. As a senior member of the team, I was approached to pilot SDR efforts and become the first dedicated SDR for Australia and New Zealand. Through this incredible opportunity, I was able to build a new sales development model from scratch while continuing to exceed metrics and expectations. The opportunity has made a significant impact on my career because I will now be permanently relocating to Sydney to continue to lead the SDR efforts in the region and further build out the program for APAC.

The market has had a need for a dedicated SDR for some time now. However, because of the time difference, it’s difficult for non-dedicated SDRs based in Austin to efficiently drive pipeline generation for the region. After my promotion, I felt driven to do more for the company and myself than just inbound opportunities, and my manager, Gigi, gave me the opportunity to support the ANZ team while still being based in Austin, completely shifting my work hours to match Sydney's. I loved that this new role was a completely new territory model for an SDR, and I had the freedom to build the plan myself, with help, of course, from Gigi and Andrew Amato (SDR team lead).

The new role was daunting at first because of the uncertainty of working off an untested model. However, the results after one quarter surpassed everyone’s expectations - even mine - with incredible pipeline and revenue generation for MongoDB. I finally feel that I am truly a valuable extension of the enterprise team in ANZ, and I can use my own strengths to think outside the box in how I drive strong opportunities and deals for the enterprise reps. Furthermore, my work and territory coverage around Atlas, our database-as-a-service cloud platform, serves as a great stepping stone and long-term development opportunity that can lead directly into my eventual closing role.

Professional growth and development at MongoDB

The growth and development I’ve received from Gigi, Gavin Jones (Regional VP of ANZ), and the ANZ enterprise reps has been incredible. I’ve learned more in the past six months in ANZ than in my entire career. I finally feel that the gap between an SDR and a closing role is shrinking, and I can continue to gain the skills and build the confidence necessary to be successful as a rep. Sydney will provide a priceless experience with strong career development that I don’t believe I would get this early in my career otherwise. The relocation makes me incredibly excited and motivated to drive business on the ground with the ANZ team and continue to push myself forward in my career.

MongoDB is an extremely fast-growing company, and through the success of my peers and me, it has set high expectations for the SDR org. The company and its leaders value SDRs in ways I have not experienced elsewhere. SDRs carry real responsibility and are expected to drive real value for the business. Although this can be intimidating, young professionals seeking a career in sales have the opportunity to develop themselves, learn what it takes to be successful in a fast-paced business, and drive incredible results. At the end of the day, MongoDB will help accelerate anyone’s career in sales.

Interested in pursuing an SDR role at MongoDB? We have several open roles on our SDR teams in Austin and in Dublin, and would love for you to build your career with us!


In honor of this past Mother’s Day, we want to take the time to highlight some of the amazing mothers who work at MongoDB. Meghan, Ozge, and Lauren share their experiences about what it’s like to be a working mother and how motherhood has impacted their perspectives on life.

Meghan Gill, VP of Sales Operations, NYC

“There is nothing more humbling than becoming a new mom. There is so much to learn about parenting and it can be overwhelming. Being a working mom means constantly juggling to fit family time alongside work commitments like QBRs and business travel. I feel fortunate that MongoDB is a flexible employer. We all work hard and everyone is understanding when it comes to daycare pickups, pediatrician appointments, and other parenting "stuff" that comes up. I feel extremely lucky that MongoDB offers a service called Cleo that has all kinds of services and education for parents, as well as the private Slack channel for pregnant women and mothers to help each other through transitions. I want to be a good example for my daughter as a woman in leadership and MongoDB is allowing me to do that!”

Ozge Tuncel, VP of Customer Success and Sales Development, NYC

“My daughter, Defne, was born in November 2016, and becoming a mother truly transformed the way I view the world and how I look at achieving a work-life blend. After becoming a mother, I leaped into another level of productivity and efficiency because I really didn’t have another choice. I needed to juggle global teams and be a fully present mother to my daughter. I became a master at prioritizing personal and professional goals, I learned how to delegate better, and gave up control in certain areas, while I gained control in others. My whole approach to life changed because I had a new motivation to succeed in my career, and in my personal life. I want to set an example for my daughter. I now have a new responsibility and am passionate about being a role model for Defne. I want her to know that she can achieve her dreams, and doesn’t have to follow specific gender norms or make trade-offs. She should not be afraid to test her potential and should give her dreams a fair chance to become true.”

Lauren Schaefer, Developer Advocate, Remote, USA

“I love my role as a Developer Advocate and the travel opportunities that come with it. Speaking at conferences and interacting with developers is one of my favorite parts of the job. With a three-year-old daughter and a husband who also travels occasionally on business, our schedule gets a bit hectic. My manager has been incredibly supportive of me whenever I've said that I need to reduce the amount I'm traveling or that I'm unable to travel because my husband will be out of town at the same time so I need to stay with my daughter. I'm so appreciative of his support!”

Thank you for sharing your stories with us! At MongoDB, we care deeply about being a diverse and inclusive workplace for all of our employees. From our parental leave policy, to employee resource groups and other important benefits, we want to make sure that every employee is supported in their professional and personal lives.

Interested in learning more about our job opportunities? Explore our open positions and join us!


Which came first, the database or the data? When you are starting with a new cluster on MongoDB Atlas, the database is just a couple of clicks to create, but unless you already have data you can upload, that database can sit empty until you learn how to import data or build an application to fill it. We know this happens often to people who create MongoDB Atlas M0 "free tier" clusters so we looked for a way to help people learn faster.

What we came up with was the new “Load Sample Data” feature. It has just been added to Atlas and enables you to quickly load six datasets into your database instance, ready for you to explore. In all, there’s 350MB of data ready for you to index, query, or aggregate using any of MongoDB’s tools, such as MongoDB Charts or MongoDB Compass. It’s all there to help you master the power of MongoDB.

On creating a new cluster you may be invited to automatically load sample data into your database. If not, then the “Load Sample Data” button can be found on the Clusters view of MongoDB Atlas under the ellipsis … button in the information panel.

Atlas will then ask you to confirm, and the sample data loads in the background. Give it a few minutes; if you try to view collections before the load finishes, you’ll see that it is still in progress.

When the loading is done, diving into your collections view will bring up a list of the six databases and their associated collections.

And don't forget that MongoDB Compass Community, your desktop companion for exploring these datasets, is freely available, so you can start building aggregation pipelines to turn the mass of data into insightful results. Find out how to install it on your desktop or laptop system in the MongoDB Compass documentation.

With this set of data to hand — and with the tutorials and MongoDB University courses free and ready for you — we hope you'll have an even smoother learning experience with MongoDB.


With just five weeks to go before we all get together at MongoDB World 2019, the planning and organizing is going full steam ahead. This week, the Hackathon winners have been announced, the weekly challenges are ongoing, and we have the answer to the question "When do you turn up for MongoDB World?"

The Hackathon Winners

Yes, the MongoDB World 2019 Hackathon has announced the winners and the three winning teams will be on their way to MongoDB World to present their creations so the grand prize winner can be chosen. There’s a mind-controlled wheelchair prototype with MongoDB Mobile recording events, a coffee delivery system which tracks stocks of coffee with MongoDB and MongoDB Stitch, and a trading strategies automator with a MongoDB Atlas backend. You’ll see them all at MongoDB World. If you still want a chance to code to win then don’t forget that...

The Challenges Continue

Eliot’s Weekly MongoDB World Challenge continues this Wednesday and every Wednesday up until MongoDB World. In the last challenge, participants used Charts to dig into a dataset through visualization - see how it’s done in the solution. Catch the next challenge live as it’s announced on the MongoDB Instagram account. This week's challenge is going to be all about Stitch Triggers so it will be a great opportunity to learn and win.

World Class Tips

We got a question from one attendee wondering what time to arrive on the Monday of MongoDB World 2019 and whether they needed to sign up for sessions in advance. Well, we can help there. Sessions start at 8am on Monday morning; allow time for registration before that. If you want to glide into sessions at 8am, why not register on Sunday at the early badge pickup that runs between 5pm and 7pm? There’s no need to sign up for sessions in advance, but we recommend making yourself a schedule so you can get from session to session quickly.

And here's another world class tip. Register for MongoDB World 2019 by next Friday (May 24th) using the code BUILD and you'll snag yourself a $149 pass into the conference. See you there!


In Eliot’s MongoDB World Weekly Challenge - Week Two, you were set a task which involved using MongoDB Charts to answer some questions around where to send and house MongoDB engineers. Charts is a great way to visualize data, so even if you didn't do the challenge (there's one every Wednesday up to MongoDB World), you can still be a winner because we're going to show you how to get the solution.

Question 1 - The Property Count Question

Remember we want our Engineers to stay in the most popular, populated market. To that end, use MongoDB Charts to analyze the data to determine which market (address.market field) has the largest number of properties listed in the sample dataset. Create a chart showing your analysis and provide a link to this chart using the embed code feature of Charts.

Well, for this and most of the other questions, we’ll use a horizontal bar graph. If we drag the address.market field over to the Y axis and set it to sort by value, we’ll bucket the results into the separate markets. Now we want to count the number of properties. Each document represents a property, so we can count documents by counting a field that is guaranteed to be present in every document: _id. Drag that field over to the X axis in Charts and set the aggregate function to Count.

And we can see Istanbul topping the charts. This is the correct answer; in this sample data set, Istanbul has the most properties.
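Under the hood, this chart is a group-and-count. Here is a plain-Python sketch of the same aggregation over a few documents shaped like the sample dataset (the documents themselves are invented for illustration):

```python
from collections import Counter

# Hypothetical documents shaped like the sample listings dataset.
listings = [
    {"_id": 1, "address": {"market": "Istanbul"}},
    {"_id": 2, "address": {"market": "Istanbul"}},
    {"_id": 3, "address": {"market": "Maui"}},
]

# Group by address.market and count documents (Charts counts _id).
counts = Counter(doc["address"]["market"] for doc in listings)
print(counts.most_common(1))  # [('Istanbul', 2)]
```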

Question 2 - Treehouses or Castles?

MongoDB Engineers typically like to stay in either Tree Houses or Castles. On average, does it cost more to stay in a treehouse or a castle? To solve this, you will probably want to use the mean value of the property_type and price fields in your chart.

This should be fairly simple. For our X axis, we’ll drag over the price field and set the aggregation function to Mean. Then, we’ll drag property_type over to the Y axis and sort it by Category.

But that isn’t the answer to the question. If you hover over the chart in the right places you can find the answer, but that’s not a good visualization. We are only interested in Treehouses and Castles, so let’s filter the data using a MongoDB query in the Filters field. Specifically:

{ property_type: { $in: [ 'Treehouse', 'Castle'] } }

With this filter in place, things are a lot easier to read.

And we can see that Treehouses, with a mean price of 185, cost more than Castles, with a mean price of 127.
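The filter-then-average that Charts performs here can be sketched in plain Python; the documents and prices below are invented, and the $in filter from above becomes a simple membership test:

```python
from statistics import mean

# Hypothetical listings; prices are made up for illustration.
listings = [
    {"property_type": "Treehouse", "price": 200},
    {"property_type": "Treehouse", "price": 170},
    {"property_type": "Castle", "price": 127},
    {"property_type": "Apartment", "price": 80},
]

wanted = {"Treehouse", "Castle"}  # mirrors { $in: ['Treehouse', 'Castle'] }
by_type = {
    t: mean(d["price"] for d in listings if d["property_type"] == t)
    for t in wanted
}
print(by_type)
```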

Question 3 - Amiable Amenities?

We also like to ensure that our Engineers stay where they have the most amenities. In which market (address.market field) do properties have the highest average number of amenities (amenities)?

The first part of this goes back to the first question: we want to analyze by market, so we drag the address.market field over to the Y axis and set it to sort by value. Next we need the number of amenities for each property. There’s no field holding that number, as the amenities are in an array, but we can count the number of elements in the array. Drag amenities onto the X axis and you are offered “Array Reductions”, ways to extract data from the array. Select Array Length, then select the aggregate function Mean.

And we can conclude that Maui is the place with the highest average number of amenities to keep the engineers occupied.
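The Array Length reduction simply replaces each amenities array with its length before averaging per market. In plain Python, over made-up documents:

```python
from statistics import mean

# Hypothetical listings; markets and amenities are invented.
listings = [
    {"address": {"market": "Maui"}, "amenities": ["Wifi", "Pool", "Kitchen"]},
    {"address": {"market": "Maui"}, "amenities": ["Wifi", "Pool", "Kitchen", "TV", "Parking"]},
    {"address": {"market": "Porto"}, "amenities": ["Wifi"]},
]

# Reduce each array to its length, then average per market.
markets = {d["address"]["market"] for d in listings}
avg_amenities = {
    m: mean(len(d["amenities"]) for d in listings if d["address"]["market"] == m)
    for m in markets
}
print(avg_amenities["Maui"])  # 4
```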

Question 4 - Review Season

Of the properties with at least one review (number_of_reviews), what month and year had the highest number of first reviews (first_review field)?

For a change, let’s use a grouped Column view this time. This is another question where a filter helps; just a simple one:

{ number_of_reviews: { $gt: 0 } } 

This will pick up only properties where the number of reviews is greater than zero - that is, properties with at least one review. Now we can use the first_review field. Drag it to the X axis and, because we want this data by month, turn binning on and set it to Monthly. We are now looking for a count of those first reviews, so let’s count the _id of the documents. Drag the _id field to the Y axis and set it to aggregate by Count.

And if we go to the peak of the chart, we see that the answer is July 2018.
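Monthly binning just truncates each date to its year and month before counting. A quick sketch with invented dates:

```python
from collections import Counter
from datetime import date

# Hypothetical first_review dates.
first_reviews = [date(2018, 7, 1), date(2018, 7, 21), date(2017, 3, 9)]

# Bin each date to (year, month), then count; the tallest bin is the answer.
bins = Counter((d.year, d.month) for d in first_reviews)
print(bins.most_common(1))  # [((2018, 7), 2)]
```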

Question 5 - Maximum Bedrooms?

We have a lot of Engineers these days and we want to try to get them all in the same home. Help us out and find out what is the maximum number of bedrooms (bedrooms field) of any property in the dataset?’

This is notionally a simple question: which property has the most bedrooms? Start with a bar chart and drag the _id field to the Y axis, sorting by value, then drag the bedrooms field to the X axis, aggregating it by Sum (or Max; it doesn’t matter, since each bar is a single property). The resulting chart has one clear outlier.

Well, there’s a result there: the outlier has 20 bedrooms. You can only deduce that from the axis, though, so there must be a better way to do this chart - and there is. Flip the X axis’s aggregate to Max: we are now looking for the maximum within a category instead of across individual properties. What category? One we used previously; property_type is suitably granular and should give up some extra insight. Drag the property_type field to the Y axis to rebuild the chart.

We can hover over the leading bar and see that the maximum number of bedrooms in the Boutique Hotel category is 20. Same answer, but now with a much more interesting and useful visualization.
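This final chart is a max-per-category aggregation. Sketched in plain Python over invented documents:

```python
# Hypothetical listings; bedroom counts are made up for illustration.
listings = [
    {"property_type": "Boutique hotel", "bedrooms": 20},
    {"property_type": "Boutique hotel", "bedrooms": 4},
    {"property_type": "House", "bedrooms": 6},
]

# Keep the maximum bedrooms seen per property_type.
max_bedrooms = {}
for d in listings:
    t = d["property_type"]
    max_bedrooms[t] = max(max_bedrooms.get(t, 0), d["bedrooms"])
print(max_bedrooms["Boutique hotel"])  # 20
```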

Displaying the Results

Most of the details for displaying the results in the Challenge are covered in the challenge instructions. The HTML page you would create would, at its simplest, look something like this:

<html>
  <head>
    <title>
      Solving Charts
    </title>
  </head>
  <body>
    <h1>Solving Charts</h1>
    <h2>Q1</h2>
    <iframe style="border: none;border-radius: 2px;box-shadow: 0 2px 10px 0 rgba(70, 76, 79, .2);" width="640"
      height="480" src="https://charts.mongodb.com/charts-sampledata-kedqw/embed/charts?id=aa829192-a2f2-44a6-8140-f61d5be88a6d&tenant=b293159e-d17f-4ed9-bc83-70dcbdd8e7b6"></iframe>
    <h2>Q2</h2>
    <iframe style="border: none;border-radius: 2px;box-shadow: 0 2px 10px 0 rgba(70, 76, 79, .2);" width="640" height="480" src="https://charts.mongodb.com/charts-sampledata-kedqw/embed/charts?id=64aaf6e4-0a8a-4afb-a3a3-64ed64d11b1c&tenant=b293159e-d17f-4ed9-bc83-70dcbdd8e7b6"></iframe>
    <h2>Q3</h2>
    <iframe style="border: none;border-radius: 2px;box-shadow: 0 2px 10px 0 rgba(70, 76, 79, .2);" width="640" height="480" src="https://charts.mongodb.com/charts-sampledata-kedqw/embed/charts?id=fe50fb2b-5fd3-4b8e-99f4-c94c3398feef&tenant=b293159e-d17f-4ed9-bc83-70dcbdd8e7b6"></iframe>
    <h2>Q4</h2>
    <iframe style="border: none;border-radius: 2px;box-shadow: 0 2px 10px 0 rgba(70, 76, 79, .2);" width="640" height="480" src="https://charts.mongodb.com/charts-sampledata-kedqw/embed/charts?id=72cf805f-6946-4828-9d39-b26dddf564e5&tenant=b293159e-d17f-4ed9-bc83-70dcbdd8e7b6"></iframe>
    <h2>Q5</h2>
    <iframe style="border: none;border-radius: 2px;box-shadow: 0 2px 10px 0 rgba(70, 76, 79, .2);" width="640" height="480" src="https://charts.mongodb.com/charts-sampledata-kedqw/embed/charts?id=0c651118-7a8c-4547-a769-ee53708ca007&tenant=b293159e-d17f-4ed9-bc83-70dcbdd8e7b6"></iframe>
  </body>
</html>

The <iframe> markup is simply copied and pasted from the MongoDB Charts application.

Wrapping Up

That's it for the solution to the Charts Challenge. As you can see, MongoDB Charts is a great way to explore your data visually. The next challenge in Eliot's Weekly MongoDB World Challenge is coming on Wednesday, May 15th, and every Wednesday up to MongoDB World. Join in, level up your MongoDB skills, and you could win a prize.


We have been blown away by the creativity of all the projects submitted for the MongoDB World Hackathon. In total 75 teams submitted projects on a wide range of topics: healthcare, games, machine learning, IoT, and more. You all did a fantastic job!

But we do have to choose the winners. There is over $42,000 worth of prizes to be won. The top 3 teams will be flown out to MongoDB World in New York City, where they will present their projects on stage and the first-place winner, chosen by audience vote, will take home $10,000! Without further ado, here are the winners:

Top 3: Brainhack Wheelchair, Kegomate, and Porygon

Brainhack Wheelchair is a prototype of a wheelchair controlled by an EEG sensor. The result is a wheelchair which is usable by people who are unable to use conventional wheelchair interfaces due to severe motor disabilities. The wheelchair prototype uses a Raspberry Pi, and the mobile app uses MongoDB Mobile to store the information streamed from the EEG sensor.

BrainHack Wheelchair Demo - MongoDB World Hackathon - YouTube

Kegomate is the brainchild of developers at 540.co which describes itself as the small business "that the Federal Government turns to in order to #GetShitDone". Kegomate collects data from a flow sensor to track coffee consumption and informs the office admin via a slackbot when coffee is running low. The 540 team used a Raspberry Pi, two Arduinos, and MongoDB Stitch to ensure they never ran out of coffee again.

Kegomate - YouTube

Porygon is a tool to help build, backtest, and automate trading strategies. Porygon will level the playing field and make trading more accessible to everyone. You can create trading strategies without code and then test your strategies without spending real money. The beautiful frontend is built using React and the backend is built using hapi.js and MongoDB Atlas.

Porygon MongoDB Hackathon Video - YouTube

We’re excited to have teams from all over the world come to New York City to show off these amazing creations! Congratulations all!

Other Prizes

Here’s a full list of the winners for the other prizes:

Rock My Map wins the prize for the best use of MongoDB Charts. This website captures rock history by mapping band tours. You can search for any band and year and Rock My Map will plot all of the shows during that year. It will also show you the tours that happened during that year. Rock My Map uses embedded MongoDB Charts to visualize some stats about the data set of tours.

Rock My Map - YouTube

Person8 wins the prize for Social Good for their app that helps homeless youth restore and maintain their identity. It keeps a copy of their identity documents and other key information, hosted securely online so a missing phone won't become a crisis. The homeless will have access to their own records and have control over who can see it.

Person8 demo - YouTube

More chances to win with Eliot’s Weekly MongoDB World Challenge

If you didn’t win a prize in the hackathon, don’t fret. We still have more chances to win prizes leading up to MongoDB World. Eliot’s Weekly MongoDB World Challenge is an exciting series of challenges coming to you from MongoDB’s very own Co-Founder & CTO, Eliot Horowitz.

Each week Eliot or a member of the engineering or product management team will introduce a new challenge that will have you exploring, investigating and discovering the world of MongoDB and data.

The challenges will increase slightly in difficulty and - to make things interesting - we’ll have a prize ladder starting at $1,000 and increasing each week until it reaches $6,000 in the sixth week.

You can enter the challenge here. Stay tuned to our Instagram for the weekly announcements.

Join us at MongoDB World

We'd love to see you at MongoDB World on June 17-19. It's a fantastic opportunity to meet like-minded developers and level up your MongoDB skills. You’ll also be able to see the final showdown between the top 3 teams for the $10,000 grand prize.

Register today using the code HACK50 for 50% off MongoDB World tickets.


The MongoDB Atlas platform is constantly evolving and the Atlas Mapped series keeps you up to date with what's launched in Atlas over the past few weeks. In this edition, we look at the latest in private network peering options available on Azure and Google Cloud Platform.

Network peering for Atlas customers on Azure and GCP

Virtual networks are one way cloud resources are kept on an isolated network, and they go by different names on the various cloud platforms. AWS and GCP customers will know these virtual networks as Virtual Private Clouds, or VPCs; on Azure, they’re simply called Virtual Networks, or VNets.

By being able to link virtual networks, an infrastructure architect can bring isolated resources together to create their own secure cloud.

In MongoDB Atlas, database clusters are grouped into projects, with each project getting its own virtual network. And it is through those virtual networks that you can attach your application servers using network peering.

Network peering allows you to establish a network connection between the network containing your application instances and the Atlas virtual network containing your managed MongoDB databases, enabling you to route traffic between them using private IP addresses. In other words, it allows your application instances to communicate with Atlas clusters as if they were within the same network. This capability has been available for customers deploying Atlas on AWS, including cross-region VPC peering, and is now available for customers deploying on both Azure and GCP.

  • On Google Cloud, a single VPC can span multiple regions without communicating across the public internet. This means you don’t need cross-region support and connections in every region.
  • On Azure, each cloud region within an Atlas project gets its own VNet. So while Atlas does support cross-region VNet peering to connect databases to application servers in another region, your Atlas cluster must reside in a single cloud region.

As a reminder, peering is only available for clusters M10 and larger. The smaller shared M0, M2, and M5 database clusters do not support network peering. To learn more about setting up a peering connection with MongoDB Atlas, visit our docs.
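One general prerequisite for any VPC/VNet peering (a standard cloud-networking constraint, not something specific to the announcement above) is that the two networks' CIDR blocks must not overlap; otherwise private-IP routing between them would be ambiguous. Python's standard library can check this; the CIDR blocks below are made up for illustration:

```python
import ipaddress

app_vpc = ipaddress.ip_network("10.0.0.0/16")          # hypothetical app-side network
atlas_net = ipaddress.ip_network("192.168.248.0/21")   # hypothetical Atlas-side network

# Peering requires the two address ranges not to overlap.
print(app_vpc.overlaps(atlas_net))  # False
```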

New Atlas regions

We’re excited to report that customers can now deploy Atlas in Johannesburg on Azure and in Zurich on GCP, marking the first time Atlas is available in those countries. And while it's not a new geographical region of the world, we are very pleased to report that with the arrival of AWS support for Hong Kong, Hong Kong is now supported on all three cloud platforms. For those keeping count, this brings the total number of cloud regions supported by Atlas to 63 (across Azure, GCP, and AWS), the most of any database service.


At MongoDB, we help our customers transform their ideas into reality. Our global Customer Success Managers are an integral part of a customer’s MongoDB journey and success in achieving ambitious goals. They are trusted advisors and the “go-to” people for best practices and advice.

MongoDB’s “CSM Series” explores the growth of our Customer Success Managers and the unique opportunities they help our customers explore, as well as the innovative techniques they use to build and shape our program.

In this first post, you will meet Taylor Francis, who was looking for her next challenge when she joined MongoDB.

Taylor Francis, Customer Success Manager, Austin

I joined the Customer Success team at MongoDB because I wanted to help solve critical business problems that are central to innovation for customers and their companies. I was looking for a disruptive technology company where I would be challenged and could make a real impact on business outcomes. My ideal company would provide a fast-paced environment, a customer-centric culture, and an incredible product—MongoDB was all three. After interviewing with the team at MongoDB, I knew that this was a place where I could take my career to the next level.

My first month at MongoDB as a Customer Success Manager

After joining the team, I quickly learned that MongoDB invests as much in its employees as it does in its customers. From day one, I was enrolled in a Buddy Program that focused heavily on technical training and helped me prepare for a week-long intensive technical and sales bootcamp in New York City. Shortly after bootcamp, I attended our annual Sales Kickoff in Las Vegas, which was full of actionable training sessions.

During eight weeks of intensive training, I attended and participated in my first Customer Success Quarterly Business Reviews (QBRs). Our QBRs are unique because each Customer Success Manager, or CSM, gets to teach our team about something they specialize in since we all become subject matter experts on MongoDB products. Our QBRs echo the entire company’s embrace of the core value, “Build Together”. We continuously share our learnings, and on a global scale, I feel like I’m personally connecting with and learning from employees around the world.

Owning your book of business and developing an expertise

Mirroring the nature of the Customer Success industry, our team is expanding rapidly and evolving as our customers grow. Our team is unique in that we embrace and capitalize on the opportunity to develop technical areas of expertise and share this knowledge with the wider team. The role of a CSM at MongoDB is very consultative — you own your book of business and need to understand how to implement strategy, analyze customer health, and make autonomous decisions based on that data. We are working with tenured Chief Technology Officers who are reliant on us to be the knowledge center for what is considered best practice and how they can stay ahead of the curve in terms of innovation. That’s no light task and definitely delivers on the daily push and career challenge that I wanted.

As a CSM, my day-to-day varies widely, which is what I love so much about the role. When I am not actively participating in and preparing for customer calls, I am doing project management for my customers, coordinating with our Technical Services Engineers and Professional Services team, aligning with Regional Sales Directors to make sure that everyone is on pace, and remaining aware of risk mitigation and growth opportunities.

I also include days of “deep work,” which are reserved for thinking about scaling and enablement techniques for my team. For example, I am currently working on a new view of the customer journey with custom Account Enablement plans and sharing these strategies with the global team. What I love most about Customer Success is that as a rapidly evolving industry, there is a unique opportunity to impact the development of the entire customer lifecycle.

Taking our company values to heart

One of the biggest learnings since joining MongoDB is the team’s embodiment of our core company value “Think Big, Go Far.” Our leadership is focused on pushing the personal growth of each and every individual on the team. They empower me to drive my own career growth, focus on improving my skill set, and proactively identify my performance gaps. The self-determination and support from my team to strategically improve each step along the journey for the customer is what I love the most about my job. MongoDB is a data-driven company and as a Customer Success team, we make intelligent, evidence-based decisions to help improve every touchpoint we have with our customers as we build impactful relationships.

To top it all off, I have the privilege of working alongside some of the brightest people in the industry who genuinely care about each other’s success.

Interested in pursuing a CSM role at MongoDB? We have several open roles on our CSM teams in Austin, New York City, and Dublin, and would love for you to build your career with us!


In a previous post, I discussed some of the methods that can be used to “lock down” the schema in your MongoDB documents. In this second part of this series, I’ll continue on with techniques beyond simple required field and value validation. We’ll explore how to further benefit from MongoDB’s document model and see how to apply another validation technique to arrays.

Checking Your Arrays

In the last post, we checked an array's structural properties; this time we turn to an aspect of the data arrays contain. An array holds a selection of items. Usually these are all different, but sometimes duplicates of the same item can creep in. If we want to guarantee that every item is distinct, we need to enforce uniqueness within the array.

Imagine your culinary endeavors take you to a food coloring company whose product is boxed sets of assorted colors of food coloring. It would probably be a great idea to make sure that each box includes different colors. How exciting would it be for a customer to get a box of nothing but blue food coloring? We can use the uniqueItems keyword in our schema validator to ensure uniqueness.

db.foodColor.drop()
db.createCollection("foodColor",
{
    validator:
    {
        $jsonSchema:
      {
        bsonType: "object",
        required: ["name", "box_size", "dyes"],
        properties:
        {
            _id: {},
            name: {
                bsonType: ["string"],
                description: "'name' is a required string"
            },
            box_size: {
                enum: [3, 4, 6],
                description: "'box_size' must be one of the values listed and is required"
            },
            dyes: {
                bsonType: ["array"],
                minItems: 1, // each box of food color must have at least one color
                uniqueItems: true, // no duplicate dye containers in a box
                items: {
                    bsonType: ["object"],
                    required: ["size", "color"],
                    additionalProperties: false,
                    description: "'items' must contain the stated fields.",
                    properties: {
                        size: {
                          enum: ["small", "medium", "large"],
                          description: "'size' is required and can only be one of the given enum values"
                                },
                        color: {
                          bsonType: "string",
                          description: "'color' is a required field of type string"
                                }
                    }
                }
            }
        }
      }
    }
})

Our packages of food coloring, as defined above, must contain unique containers of dyes. That uniqueness is judged by the combination of size and color.

Document 1

We can of course have a box with three different sizes of different colors.

db.foodColor.insertOne({name: "Rainbow RGB", box_size: 3,
dyes: [
        {size: "small", color: "red"},
        {size: "medium", color: "green"},
        {size: "large", color: "blue"}]}) // works

Each item is unique. So this works.

Document 2

We could also have a package with three different sizes of the same color; it is the combination of size and color that must be unique in the dyes array. That means that when a document is inserted like this:

db.foodColor.insertOne({name: "Singinꞌ the Blues", box_size: 3,
dyes: [
        {size: "small", color: "blue"},
        {size: "medium", color: "blue"},
        {size: "large", color: "blue"}]}) // works

It is valid because each item in the array is a unique color and size combination. Let’s try another one:

Document 3

Let's fill a box with red coloring in various sizes:

db.foodColor.insertOne({name: "Reds", box_size: 6,
dyes: [
        {size: "small", color: "red"},
        {size: "medium", color: "red"},
        {size: "large", color: "red"},
        {size: "small", color: "scarlet"},
        {size: "small", color: "brick red"},
        {size: "small", color: "red"}
]}) // doesn't work, there are two small red dyes in this box

We see here that due to there being two small red containers in this package, the insert fails.
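Conceptually, uniqueItems compares whole array items by value rather than by reference, so two embedded documents with identical contents count as duplicates. As a rough sketch in plain JavaScript (illustrative only — MongoDB's actual comparison operates on BSON and has its own rules for types and field order), the check behaves something like this:

```javascript
// Illustrative sketch of the value-based duplicate check behind
// uniqueItems. Not MongoDB's real BSON comparison.
function canonical(value) {
  if (value === null || typeof value !== "object") {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return "[" + value.map(canonical).join(",") + "]";
  }
  // Sort keys so field order does not affect the comparison here.
  return "{" + Object.keys(value).sort()
    .map(k => JSON.stringify(k) + ":" + canonical(value[k]))
    .join(",") + "}";
}

function allItemsUnique(items) {
  const seen = new Set();
  for (const item of items) {
    const key = canonical(item);
    if (seen.has(key)) return false; // duplicate found
    seen.add(key);
  }
  return true;
}

// Three blues in different sizes: all unique.
console.log(allItemsUnique([
  {size: "small", color: "blue"},
  {size: "medium", color: "blue"},
  {size: "large", color: "blue"}
])); // true

// Two small reds: a duplicate, so validation would reject the box.
console.log(allItemsUnique([
  {size: "small", color: "red"},
  {size: "small", color: "red"}
])); // false
```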

Document 4

What if someone tries to create a "special edition" of coloring with an aroma or taste to make the boxes more interesting? The validation schema helps there too.

db.foodColor.insertOne({name: "Specials", box_size: 3,
dyes: [
        {size: "small", color: "red", aroma: "malty"},
        {size: "medium", color: "red", aroma: "fruity"},
        {size: "large", color: "red", taste: "salty"}
]}) // doesn't work, there are extra properties

Being able to validate both the shape of documents and the values they contain is a valuable and powerful tool. We can extend the validation process further by tailoring the schema to the specific properties our documents are allowed to carry.
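Validation rules don't have to be fixed at creation time, either. As a sketch (assuming a running MongoDB instance and the foodColor collection from above), the collMod command can swap in a new validator on an existing collection, and validationLevel controls how strictly existing documents are held to it:

```javascript
// Sketch: change validation on an existing collection with collMod.
// Assumes a running MongoDB instance and the foodColor collection above.
db.runCommand({
  collMod: "foodColor",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name", "box_size", "dyes"],
      properties: {
        box_size: { enum: [3, 4, 6] }
      }
    }
  },
  // "moderate": existing non-conforming documents are tolerated;
  // inserts and updates to conforming documents are still checked.
  validationLevel: "moderate",
  // "error" (the default) rejects failing writes; "warn" only logs them.
  validationAction: "error"
})
```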

Conclusion

JSON schema validation can greatly enhance your application and add security to your system. In this particular case, we've used validation to ensure that there are no duplicates in embedded arrays in a document, entirely through defining the schema and with no additional code. We've also protected against having unauthorized extensions to the specification of array objects. For our food coloring example, the validation schema can't solve the variety problem, but it can prevent some of the worst failures possible at a database level.
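One closing sketch: because $jsonSchema also works as a query operator, you can find any documents already in a collection that fail a schema, for example before tightening a validator. Assuming the foodColor collection from above:

```javascript
// Sketch: find non-conforming documents by using $jsonSchema as a
// query operator inside $nor. Assumes the foodColor collection above.
const dyeBoxSchema = {
  bsonType: "object",
  required: ["name", "box_size", "dyes"],
  properties: {
    box_size: { enum: [3, 4, 6] }
  }
};

// Matches documents that satisfy none of the listed conditions,
// i.e. the documents that fail the schema.
db.foodColor.find({ $nor: [ { $jsonSchema: dyeBoxSchema } ] })
```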

The techniques provided both here and in the previous post are wonderful tools to have in your toolbox when working with the MongoDB document model. In part three of this series, I’ll explore schema dependencies and show how to make fields dependent on the existence of others.
