EA has demonstrated how 'next-gen' hair will look when using the company's Frostbite graphics engine.
The impressive demo shows a creepy mannequin walking and the hair reacting naturally to both movement and light changes. Such detail on each strand is something to behold.
Frostbite Hair Montage - YouTube
EA's technical achievement is the result of dedicating a small team to the task at its DICE and Criterion studios.
Back in 2012, Disney Pixar released its hit animated movie Brave. While many will have watched unaware, the curly hair of lead character Merida was a groundbreaking achievement.
The studio's existing hair simulator wasn't up to the task, so a new one, named Taz, was built starting in 2009.
Taz forms individual coils around computer-generated cylinders of varying lengths and diameters which stretch out when Merida runs, but snap back into place after. Each strand is also strung through with a flexible 'core curve' which allows the coils to bounce and brush against one another without unwinding.
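To make the mechanism concrete, here is a toy mass-spring sketch in Python of a strand pinned at one end. It is purely illustrative, with made-up constants, and in no way Disney's or EA's actual simulation code; the idea of a flexible core that stretches under motion and snaps back is what the damped springs stand in for.

```python
# Toy illustration of a springy hair strand: a chain of point masses
# connected by damped springs, loosely analogous to the 'core curve'
# idea described above. NOT the actual Taz or Frostbite simulation.

def simulate_strand(n_points=5, rest_len=1.0, stiffness=50.0,
                    damping=2.0, gravity=-9.8, dt=0.001, steps=5000):
    """Simulate a vertical strand pinned at the top; returns final y-positions."""
    ys = [-i * rest_len for i in range(n_points)]  # positions (top pinned at y=0)
    vs = [0.0] * n_points                          # velocities

    for _ in range(steps):
        for i in range(1, n_points):               # point 0 stays pinned
            # Spring to the point above: if stretched, it pulls this point up
            stretch = (ys[i - 1] - ys[i]) - rest_len
            force = stiffness * stretch + gravity - damping * vs[i]
            if i < n_points - 1:
                # Spring to the point below: if stretched, it pulls this point down
                stretch_below = (ys[i] - ys[i + 1]) - rest_len
                force -= stiffness * stretch_below
            vs[i] += force * dt                    # semi-implicit Euler step
            ys[i] += vs[i] * dt
    return ys

final = simulate_strand()
```

Under gravity the springs between points stretch and the strand settles into a sagged equilibrium; remove the force and it snaps back toward rest length, which is the behaviour described for Merida's coils.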
Just as Taz transformed hair in animated film, EA's next-gen Frostbite engine looks to be a genuine game-changer for in-game hair graphics.
Frostbite has been having a rough time as of late. BioWare, in particular, cited the engine as the reason for inadequate performance in Anthem and Mass Effect: Andromeda.
EA uses Frostbite for the majority of its releases, but some recent games (e.g. Apex Legends) and upcoming titles (e.g. Star Wars Jedi: Fallen Order) have opted to avoid it.
Hopefully, the next-gen Frostbite engine performs more consistently. We look forward to seeing what game developers can do with it.
Apple has begun sending out invites for WWDC 2019 so here’s your customary rundown of what to expect from the keynote.
The date, time, and location of Apple’s keynote have been set: June 3rd at 10:00 am PT in the McEnery Convention Center in San Jose, California.
Often the invite has subtle hints of what to expect so fans around the world spend hours analysing it. We’ve attempted it for you, and the only thing we’ve gathered so far is Apple plans to launch a happy app-running unicorn which spews diamonds.
Take that, people who say Apple can no longer innovate.
Software: iOS 13, macOS 10.15, and the others
Sifting through rumours, leaks, and people’s hopes and dreams for Apple’s increasing number of platforms is a task in itself. Some are likely, while others are more fantastical (well, I would have said that before the iUnicorn).
We’re going to focus on the former category; the more probable inclusions.
iOS 13 will almost certainly have a dark mode. Users want it, many apps arguably look better, it’s easier on the eyes, and it reduces battery consumption (especially on those OLED panels Apple has started using). Oh, and Android Q is getting it.
There are rumours floating around that a redesigned home screen will make its debut in iOS 13. Such murmurs have been circulating for years, so we’d not be surprised if this doesn’t come to fruition, although it would be welcome.
Apple has a conundrum on its hands with the iOS home screen; it’s iconic and has the simplicity that users welcome, but it’s getting tired. For iPhone, the longstanding interface works fairly well. On iPad, however, its poor use of screen space is a running joke. Of course, if you’re changing one interface then it must remain consistent with the other.
The iPad looks set to receive some love. Perhaps the most important rumoured addition for iPad is mouse support. This seems highly likely, given Apple’s intention to make the iPad a laptop replacement. Developers will also have far more options when it comes to apps and games requiring greater precision than what touch affords.
Elsewhere in iOS, we’re expecting four new Animoji (god help us), 200 new emojis (seriously, help), and some overdue updates to Apple’s first-party apps.
Developers, specifically, can expect Apple to open its portcullis slightly further, improving Siri and NFC support for third parties. Apple’s so-called ‘Marzipan’ framework should allow iOS and macOS apps to finally share a common codebase.
One thing to keep an eye on is what iDevices will be reaching their end of life.
Apple hasn’t completely forgotten about the Mac and debuted a MacBook Pro refresh earlier this week. Now we’re looking to WWDC to see what Apple has planned on the software side.
Rumours have been quiet on the macOS front. Aside from rumblings that iTunes will be split into separate apps for music, podcasts, and TV, the only mildly interesting feature being talked about is the ability to use an iPad as a second display.
As for watchOS and tvOS, we don’t have much for you. We’ll likely see more Apple Watch integrations with gym equipment as part of GymKit, and some more channels for TV.
Given the aforementioned MacBook Pro refresh and iPad updates in March, we’re not expecting much on the hardware front. iPad Pros now typically follow the iPhone fall release cycle. That said, we’ll likely see at least some bits of hardware announced.
A modular Mac Pro has been rumoured since 2017, and Apple confirmed last year that a refresh will arrive in 2019. WWDC seems a good time for a debut. There’s some discussion that a 31.6-inch 6K display will even grace the stage alongside it.
The other piece of hardware that has the potential for a keynote showing is a new HomePod addition. This could be a cheaper device to grab more of a market dominated by Amazon and Google, one featuring a screen like those competitors, or perhaps even both.
Over 5,000 developers are expected to hit San Jose for Apple’s keynote, and we’ll keep you posted with any rumours or announcements.
Huawei’s in-house smartphone OS is due to be released this year and Chinese media reports it will even support Android apps.
The OS has been in development since 2012 following a US investigation into Huawei and ZTE, but recent events are sure to have sped up the company’s plans.
Richard Yu, Huawei Consumer Business CEO, said the OS is "designed for the next generation of technology" and "will be available in the fall of this year and at the latest next spring."
There’s been little need for Huawei to use an OS other than Android. Huawei is now the world’s second most popular smartphone brand after overtaking Apple and is even threatening Samsung’s top spot. However, it’s prudent to have a backup plan whenever relying on a third-party.
Earlier this week, Google said it would stop providing Android updates and prevent future access to its services for Huawei devices. The decision was made after the US Commerce Department added Huawei to its ‘Entity List’, which means any dealings require prior approval from the government.
Bryan Ma, VP of Client Devices Research at IDC Asia-Pacific, said of the decision:
"As far as overseas markets go, this move just turned Huawei’s upcoming phones into paperweights. The phones won’t be very useful anymore without Google apps on them, and other apps will be unable to call on Google Play services."
Huawei has since been granted a temporary US general licence and Google has resumed its partnership, but the situation is sure to have spooked the firm.
"Google’s restriction of business ties with Huawei could obliterate demand for the Chinese company’s devices overseas and give market leader Samsung a leg up in cementing its lead in Android devices," reported South China Morning Post.
The temporary relief gives Huawei time to work on its replacement OS. Details are sparse, but Yu said it will be “open to mobile phones, computers, tablets, TVs, cars and smart wearable devices," and "compatible with all Android applications and all web applications."
Yu claims if an Android app is recompiled for the new OS, running performance is improved by more than 60 percent.
Huawei founder Ren Zhengfei told a Chinese state media broadcast on Tuesday that the US ban will not impact the company's plans, with rivals "at least two to three years behind." He also said, "the current practice of US politicians underestimates our strength."
Developer will be following the situation and learning what it can about Huawei’s new OS. However, it’s of some comfort to know Huawei has a backup plan and Android developers will not completely lose distribution to customers of the world’s second largest smartphone manufacturer.
DevOps engineers would love to measure the performance and availability of every page of a website or application; they could make sure each page is up and running and check that response time and performance remain strong and consistent. It’s easy enough to do technically: attach a Synthetic User Monitor to each page and view the results in an easy-to-use data presentation tool.
The snag is that it can be expensive. Synthetic User Monitoring tools usually come as part of a large Application Performance Monitoring package. Sometimes engineers can get a few pages for free, but nothing substantial. An open-source tool keeps costs low and covers the basics without a high recurring fee.
The Synthetic User Monitoring (SUM) market is one of the largest and fastest growing subsegments of the IT Operations Management (ITOM) market, with preliminary forecasts of approximately $2.1 billion in revenue by 2021 and a growth rate exceeding 17% CAGR.
Synthetic monitoring is valuable because it allows a DevOps engineer to identify problems and determine if a website or application is slow or experiencing downtime before that problem affects actual end-users or customers. This type of monitoring does not require actual traffic, so it enables companies to test applications 24x7 or before a live customer-facing launch.
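The core of a synthetic probe is simple to sketch. The following Python function is an illustrative example only, not taken from any particular monitoring tool; the state names and the slow-response threshold are assumptions chosen for the sketch.

```python
import time

def run_synthetic_check(fetch, url, slow_threshold_s=2.0):
    """Run one synthetic probe against `url` and classify the result.

    `fetch` is any callable that takes a URL and returns an HTTP status
    code (in a real monitor this would wrap urllib, requests, or a
    headless browser). Returns a small result dict an alerting layer
    could act on.
    """
    start = time.monotonic()
    try:
        status = fetch(url)
    except Exception as exc:
        # Network failure, DNS error, timeout, etc. -> site is down
        return {"url": url, "state": "down", "error": str(exc)}
    elapsed = time.monotonic() - start

    if status >= 400:
        state = "down"           # server returned an error
    elif elapsed > slow_threshold_s:
        state = "slow"           # up, but performance is degraded
    else:
        state = "up"
    return {"url": url, "state": state, "status": status,
            "elapsed_s": round(elapsed, 3)}
```

A scheduler would simply run this against every page at the chosen frequency and from the chosen locations, feeding the results into dashboards and alerts.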
Before easy-to-use open-source tools became available, SUM was only used to monitor commonly trafficked paths and critical business processes. Measuring performance for every page was not thought to be feasible.
Reports from leading Gartner analysts suggest that operations and infrastructure leaders struggle with monitoring cost and complexity, and remain at a loss when asked to provide a business justification for monitoring efforts. Open-source SUM is easy to justify: if the application is down or slow, it loses value.
Analysts also suggest that price continues to be a nontrivial roadblock to broader adoption of monitoring across all critical applications in an enterprise. SUM is often priced as part of an APM bundle and becomes more costly than it should be.
What to look for in an Open Source Synthetic User Monitoring tool:
Provides real-time monitoring of website availability and performance from a simulated end-user perspective
Flags major problems with a website or web application as they arise
Lets engineers dig into site performance with waterfall charts and test results, including detailed downtime and error reports
Ensures important transactions complete error-free through customised testing
Allows choice of frequency and location of tests based on business function and need
Alerts whenever a site or feature is unavailable, or performance is degraded, so that site owners can react quickly
Who should consider Open Source Synthetic User Monitoring:
All companies want to maintain high availability for their extensive set of cloud applications. SUM can help with that when it is used by engineers on the front lines. An additional benefit is that easily supplied SUM data can also be used to break down information silos: DevOps can provide much-needed performance data to dev teams that can help improve the performance of the next set of application deployments.
Your website influences your users’ overall perception of your brand and your product. Non-responsive apps, 404 errors, expired or invalid SSL certificates, or, worse yet, the "Site Not Found" error all negatively affect that perception.
Firefox has been updated with a range of significant performance improvements, but developers will be most interested in WebRender’s rollout and the updated AV1 decoder.
WebRender is Mozilla’s next-gen GPU-based 2D rendering engine. The technology aims to make browsing feel smoother by moving core graphics rendering processes to the GPU.
Previous iterations of Firefox’s browser rendering pipeline varied depending on platform and OS, which had two core drawbacks:
Depending on the variation, rendering was performed on the CPU, consuming resources valuable for other tasks.
Maintaining a multitude of backends was inefficient and costly.
In a blog post, Mozilla Engineering Manager Jessie Bonisteel wrote:
“A single backend that we control means bringing hardware acceleration to more of our users: we run the same code across Windows, Mac, Linux, and Android, and we’re much better equipped to work around driver bugs and avoid blacklisting. It also moves GPU work out of the content process which will let us have stricter sandboxing in the content process.
We’ve seen significant performance improvements on many websites already, but we’ve only scratched the surface of what’s possible with this architecture. Expect to see even more performance improvements as we begin to take full advantage of our architectural investment in WebRender.”
WebRender is going live in today’s Firefox update, but initially only for users running Windows 10 on desktop machines with NVIDIA graphics cards (around 4% of Firefox’s desktop population).
On May 27th, 25% of the qualified population will have WebRender enabled. That will increase to 50% by Thursday, May 30th if everything goes smoothly. WebRender will then be enabled for 100% of the qualified population the following week.
WebRender is not the only significant feature in the latest Firefox version. An updated decoder for AV1 – the new royalty-free video format jointly developed by Mozilla, Google, Microsoft, Amazon and others as part of the Alliance for Open Media – is being launched.
The new decoder is called dav1d and replaces the reference decoder shipped in January’s release of Firefox. Mozilla has witnessed substantial growth in the use of AV1 – with their latest figures showing 11.8% of video playback in Firefox Beta used the new format, up from 0.85% in February and 3% in March.
Elsewhere in the release, Mozilla is adding enhanced data protections such as blocking ‘fingerprinting’ and crypto-mining, improved accessibility features, and increased speed through better resource management and the suspension of idle tabs.
The latest version of Firefox Quantum is available here.
President Trump’s trade war on China has resulted in Google, Qualcomm, Intel, Broadcom, and Xilinx ending their relationship with Huawei.
Trump signed an executive order effectively banning telecoms equipment from foreign firms deemed a security risk. Shortly after, the US Commerce Department added Huawei to its Entity List. Any US firm wanting to have dealings with a company on the list requires prior approval from the government.
Google was the first to announce the relationship termination. Huawei will lose access to future Android updates as well as Google’s services, but existing devices will continue having access to Google Play and Google Play Protect.
Charlie Dai, a principal analyst at Forrester, said:
“This move will have a critical impact on Huawei’s business around smartphones. Huawei has its own mobile OS as a backup, but it’s not fully ready yet and it’s very difficult to build up the ecosystem as what Huawei has been doing on Android.
It’s no good for consumers around the world, and it’s a pity that customer value – facilitated by the open-source spirit – is now ruined by politics.”
Intel, Qualcomm, Broadcom, and Xilinx have all told employees they will no longer supply Huawei.
Qualcomm supplies chipsets and modems for Huawei’s budget smartphones. Huawei’s more premium devices use Kirin processors developed in-house.
Intel, meanwhile, supplies processors for Huawei’s laptops and servers. This decision extends the impact on Huawei’s consumer division beyond smartphones.
The consumer business became Huawei’s largest source of revenue last year, driven by well-received devices such as the P30 Pro. Here’s a breakdown of Huawei’s revenue sources from its 2018 annual report:
Huawei is the world’s second largest smartphone manufacturer; overtaking Apple in Q1 2019 as its volumes increased by nearly 50% year-on-year. The company’s telecoms infrastructure business has often come under fire by the US over national security concerns, but its consumer division had been relatively unscathed.
Shobhit Srivastava, a research analyst at Counterpoint, said last month:
“Huawei became the second largest smartphone brand by shipments without a significant presence in an important market like the United States. It was also the fastest growing brand among the top 10. At this pace, we expect Huawei to remain ahead of Apple at the end of 2019.
What has helped Huawei is the pace of its innovations. It was the first to introduce features like reverse wireless charging, onboard AI, advanced camera, and more. A dual-brand (HONOR) strategy has helped Huawei build a connection to younger profile consumers and gain additional market share in a sluggish Chinese market.
Huawei is now a match for Samsung in smartphone hardware. Like Samsung and Apple, Huawei also is becoming increasingly vertically integrated. We believe it is Huawei that Samsung should be worrying about rather than Apple.”
As noted by Srivastava, the US is not a big market for Huawei. Most smartphones in the US are purchased using carrier subsidies, but operators have been unwilling to partner with Huawei over threats of making them ineligible for government contracts.
AT&T was due to announce a carrier partnership with Huawei during CES 2018, but the carrier famously pulled out last second. CEO Richard Yu used the stage to express his frustration with the US market.
On Huawei's smartphones being distributed by US carriers, Yu said on-stage: “Unfortunately at this time we cannot. It’s a big loss for consumers because they don’t have the best choice for devices.”
“We’ve won the trust of the Chinese carriers,” Yu told the crowd. “We’ve also won spots on all of the European carriers.”
The decisions of Google, Intel, Qualcomm, Broadcom, and Xilinx will be devastating for Huawei’s consumer business. With the company also facing a fierce battle with its telecoms business, it’s hard to see how Huawei can overcome this unless there's a resolution to the US-China trade war.
Excluding a player like Huawei will hit the Android ecosystem and its developers. However, a loss for Android is a win for iOS.
In a contender for the decade’s most surprising partnership, Sony and Microsoft have agreed to work together on cloud and AI technologies.
AI and cloud are increasingly vital technologies, and Sony isn’t exactly considered a leader in either. It’s easy to see what Sony is getting out of the partnership, but less so what Microsoft sees in the collaboration.
Microsoft and Sony are each other’s biggest competitor in the gaming space. Gaming is shifting towards cloud streaming, as shown by Google Stadia and Microsoft’s own xCloud, and Sony was expected to struggle in this area.
Sony was among the first to launch a game streaming platform with PlayStation Now, but it suffered from poor quality and high latency. Unlike Microsoft and Google, both major cloud players, Sony doesn’t have a large global infrastructure to rely on.
“The two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services,” Microsoft said in a statement.
As part of the agreement, Sony will use Microsoft’s cloud solution Azure. PlayStation’s existing game and content-streaming services will use Azure in the future.
Microsoft will profit from the deal, but if the company wanted to be ruthless, it probably could have shut Sony out of the market and contended with Google’s emergence alone. Microsoft has established franchises, studios, and partners in the gaming market; Google is yet to prove itself, which could have put Xbox in a substantial lead.
Sony arguably has the strongest, most recognisable franchises in the gaming industry. There’s a possibility this partnership is to combine those franchises, with Microsoft’s technology, in a bid to fend off competition from Google and Amazon.
Kenichiro Yoshida, President and CEO of Sony, said:
“PlayStation itself came about through the integration of creativity and technology. Our mission is to seamlessly evolve this platform as one that continues to deliver the best and most immersive entertainment experiences, together with a cloud environment that ensures the best possible experience, anytime, anywhere.
For many years, Microsoft has been a key business partner for us, though of course the two companies have also been competing in some areas. I believe that our joint development of future cloud solutions will contribute greatly to the advancement of interactive content.”
Earlier this year, Microsoft’s gaming boss Phil Spencer promised Xbox will ‘go big’ at this year’s E3 conference which Sony will be absent from. A lot more will be revealed about the company’s gaming strategy then, so we’ll keep you posted with any updates.
Elsewhere, Sony and Microsoft will also be partnering on AI and semiconductors. This will include the potential joint development of new intelligent image sensor solutions.
The companies aim to provide enhanced capabilities for enterprise customers through integrating Sony’s image sensors with Microsoft’s Azure AI technology in a hybrid manner across cloud and edge.
Both parties also plan to incorporate Microsoft’s advanced AI platform and tools in Sony consumer products, to ‘provide highly intuitive and user-friendly AI experiences.’
“Sony has always been a leader in both entertainment and technology, and the collaboration we announced today builds on this history of innovation,” comments Microsoft CEO Satya Nadella. “Our partnership brings the power of Azure and Azure AI to Sony to deliver new gaming and entertainment experiences for customers.”
One could look at today’s web and mobile applications and feel that the pace of change has slowed a bit. On the surface, they may not have dramatically changed. However, this surface similarity masks huge shifts happening underneath. Applications are becoming more dependent on an interconnection of services from cloud builds to cloud functions, from first-party microservices to third-party AI services, from improving identity and authentication services, and much more.
This interdependence and reliance on services and the growing complexity of serving audiences across devices underpin many of the trends that are impacting professional developers in 2019. What can developers expect and how can they continue to maximise their existing skillsets and build apps that stand out from the crowd? Let’s look at five of the key trends that are driving the industry forward:
Accessibility
Accessibility, in web, mobile, and desktop application development, is not new. What’s new is a sudden interest from the broader development community, beyond just those sites mandated by law to be accessible. Some of this is being driven by companies wary of relatively recent litigation around website and mobile app accessibility. It’s also being driven by what appears to be a growing concern for inclusion among influential members of the developer community. This isn’t a change that will likely occur quickly – with most companies’ timelines and budgets always constrained, accessibility is often considered a “nice to have” as opposed to a requirement. However, I think 2019 will be a pivotal year in a change in attitudes.
Progressive Web Apps (PWAs)
You may be thinking, “I’m pretty sure Progressive Web Apps (PWAs) were trending last year.” If so, you’re right – and they are still trending in 2019. Why? There are a number of factors. One of the most important is the growing support for PWA features in iOS. The lack of iOS support was a drawback to early PWA adoption that was somewhat mitigated by the “progressive” nature of the technology (i.e. they worked on iOS, but as though they were a standard web page). Another factor is that large and influential companies, like Microsoft and Google, are continuing to push the initiative – even adding features like Google Play Store and Windows Store support. Finally, large brands including Instagram, Twitter, Uber and Starbucks have released high profile PWAs.
GraphQL
Today’s applications, whether on the web, mobile, or desktop, typically rely on a myriad of APIs to send and receive the data that makes them function. However, the limitations of REST-based APIs have meant that, in many cases, apps were overfetching (getting more data than they need because that’s all the API supports) or underfetching (having to make multiple API calls for one set of data). Enter GraphQL, an open source data query language for APIs originally developed by Facebook, but now part of the Linux Foundation. GraphQL solves these problems by allowing the API developer to describe their data and the application developer to query the endpoint for all the data they need – and only what they need.
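The over/underfetching fix is easiest to see in an actual query. The sketch below builds a single GraphQL request in Python; the endpoint URL, the `user`/`posts` schema, and the field names are hypothetical, chosen only to illustrate asking for exactly the fields one view needs.

```python
import json
import urllib.request

# Hypothetical GraphQL endpoint and schema, for illustration only.
GRAPHQL_ENDPOINT = "https://api.example.com/graphql"

# One request asking for exactly the fields a "user card" view needs:
# no overfetching (no unused fields) and no underfetching (user details
# and their latest posts arrive in a single round trip).
query = """
query UserCard($id: ID!) {
  user(id: $id) {
    name
    avatarUrl
    posts(last: 3) { title }
  }
}
"""

def build_request(user_id):
    """Package the query and its variables as a standard GraphQL POST."""
    payload = json.dumps({"query": query,
                          "variables": {"id": user_id}}).encode()
    return urllib.request.Request(
        GRAPHQL_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"})
```

With REST, the same view would typically mean a `/users/:id` call returning every user field plus a second `/users/:id/posts` call; here the client names its shape and the server returns just that.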
Serverless
Serverless is another item that has arguably been trending for years. However, as more companies move their infrastructure to the cloud and start building new applications, or rearchitecting old ones, around a microservices architecture, serverless will continue to trend. It is not uncommon nowadays to hear about applications that are a mash-up of services run across a variety of cloud vendors, each meeting specific application needs. This has even led to new application development models such as the JAMstack, which takes the classic static site and, using serverless functions called asynchronously, makes it fully dynamic without the need for a database and without the potential security risk of an application server platform.
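A serverless function is just a small handler the cloud provider invokes on demand. The Python sketch below follows the common AWS-Lambda-style handler shape as an assumption; the event fields and response format are illustrative, not tied to any specific deployment.

```python
import json

# Minimal sketch of a serverless function in the common Lambda-style
# handler shape (event dict in, response dict out). A static JAMstack
# page would call this asynchronously via its HTTP endpoint, getting
# dynamic behaviour with no application server to maintain.
def handler(event, context=None):
    """Return a JSON greeting built from an optional query parameter."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the function holds no state and spins up per request, the provider handles scaling and the developer pays only for invocations, which is much of serverless's appeal for these mash-up architectures.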
Low code development platforms
Today’s enterprises are demanding more applications with more features from their existing development teams. Low code development platforms, sometimes referred to as high-productivity development platforms, aim to help developers address these growing needs by providing platforms that simplify the creation of common application requirements. Though the exact definition of “low code” varies widely, most of these tools rely on visual programming interfaces combined with some degree of code generation to accelerate anything from initial app prototypes to a finished application. Expect enterprises to continue to increase adoption of these tools to try to meet their application needs.
Twitter has opened the doors to its Developer Labs where developers can test new API products from the social network before release.
Initially, Twitter is planning to focus on conversational data. This will be for academics and researchers who study interactions on the social network, as well as social listening and analytics providers that build products for other businesses.
Twitter says developers can provide feedback on what they like and dislike to help build ‘the next generation of the Twitter API’.
While the social network has added enterprise data APIs and the Ads API, the main API has been stuck on version 1.1 since August 2012. We all know how that update went down (hint: not very well) after it implemented rate and token limits, which killed off many third-party clients.
Ian Cairns, Group Product Manager at Twitter, wrote in a post:
“Twitter has significantly changed since we introduced the v1.1 API in 2012, as has the way developers use the Twitter API. Going forward, we want to make it easier for more developers to get started and grow with us while continuing to provide a useful, open and free API offering.
We’re building the future of our developer platform with a diverse range of developers in mind.”
Twitter has long promised to rebuild its relationship with developers but, as of yet, we’ve seen very few moves to achieve that. The company says it wants to simplify its services for developers, make them easier to use, and offer more features.
While such noises are welcome, we’ll have to wait and see whether it’s just noise or whether it’s backed with positive action. Opening up the Developer Lab offers Twitter a chance to rebuild the bridges it's burnt time and time again.
“We know we still need to do a better job listening to and learning from our developer community. Labs is a test program to help us do that, in line with the more open, public approach to product development we’re taking across the company.
We’re inviting interested developers to preview new features and tell us what they like, what they don’t, and what would help them—before we launch new API features broadly to everyone.”
Anyone with a developer account can sign up to Developer Labs here to receive updates when the first endpoints go live in the coming weeks. Documentation can also be found here.
Further updates on the program will be shared via @TwitterDev. A #TapIntoTwitter event will be hosted in New York on June 4th where Twitter will meet with its developer community and share more details.
Developer sat down with Unite UX product manager Daniel Levy to hear Progress’ plan to bridge the gap between developers and designers.
Most of us have witnessed or experienced the disconnect. The different tools used often result in a back-and-forth with constant iterations needed. Ultimately, this wastes effort and money and slows time-to-market.
Unite UX aims to offer a roundtrip workflow allowing developers and designers to continue using their existing tools.
“Unite UX allows designers to collaborate better with developers through a series of plugins and libraries that we supply with UX such as Sketch and Adobe XD,” explains Levy. “A plugin that we’ve created allows the export of the designs into a MetaBridge format which is then consumed by Unite UX Studio to instantly transfer that design into a fully-coded Angular or React application.”
Any changes suggested by developers can be imported back into the design tool, with a focus on minimising version challenges. This more efficient collaboration promises more consistency (and less frustration!) across design and development, helping apps get to market as fast as possible without the UX suffering.
“We’re not recreating the wheel when it comes to design tools, we’re working with existing design tools which is huge for UX designers because they love their existing tools,” says Levy. “We keep them in the environment they’re super-comfortable in.”
“On the developer side, we are bringing that design – pixel perfect – right over into a development tool so the developers don’t have to struggle with writing the CSS and HTML to try and interpret those designs to a tee, and this saves so many iterations.”
While Unite UX is launching with support for Angular and React, other frameworks will be added over time. Currently in the pipeline is support for Vue.js and Blazor.
Unite UX seems like a breakthrough: the workflow just makes more sense, and everything prior looks instantly archaic. We asked Levy why he believes a solution like Unite UX hasn’t been created before, by Progress or anyone else.
“The round-trip experience is very challenging, but the time is right – these pains are really starting to surface over the last couple of years and being in the UI space for the last 15 years puts us in a very strong position to be able to address this problem in a rightful manner and in a way that developers will appreciate, not just the UX designers.”
Progress has indeed been in the space for some time, and Levy has been with the company since 2010. We took the opportunity to get his thoughts on what makes a good UX.
“You have to listen to the customer, you have to understand what they’re trying to achieve but you don’t do exactly what the customer is asking,” says Levy. “You take it, you research it, you watch what they do and then iterate and put different designs in front of them and see which ones were more intuitive to them.”
“But, if they’re banging their head on the table – or if they’re struggling with anything – you catch that and you try to identify what they’re struggling with and iterate around that to make it a better overall experience.”
Our full interview with Levy can be viewed below:
Daniel Levy, Unite UX: Uniting developers and designers | ProgressNEXT 2019 - YouTube
You can join the waitlist for access to Unite UX here.