
TSMC posted mixed results for the second quarter. It also presented a mixed outlook for the third quarter, according to various reports.

TSMC and Samsung are in the midst of a foundry battle at 7nm and 5nm. “TSMC raised its 2019 capex outlook to over $11B, up from prior guidance of $10B-$11B. The increased capex is to support 5nm and 7nm ramps, with accelerated 5G investment and strong customer demand noted as drivers,” according to a research note from KeyBanc. “TSMC indicated that it is accelerating its 7nm and 5nm ramps due to high customer demand, although we believe it’s also due to Samsung aggressively courting customers at 7nm. This bodes well for semiconductor equipment companies, which should benefit from any arms race between the two foundries.”

Chinese smartphone maker Xiaomi has taken a 6% stake in chip designer VeriSilicon, according to various reports.

Toshiba Memory Holdings will change its name to Kioxia Holdings on Oct. 1. The name Kioxia (kee-ox-ee-uh) is a combination of the Japanese word kioku meaning “memory” and the Greek word “axia” meaning “value.”

Fab tools
ASML posted mixed results for the quarter. “Our second-quarter sales came in within guidance and the gross margin came in above guidance, helped by improved EUV manufacturing results and higher field upgrade sales, which more than compensates the negative mix effect in comparison with Q1,” said ASML President and Chief Executive Peter Wennink. “For the remainder of the year we see further weakness in memory, while logic looks stronger. We expect that the increased demand in logic will compensate for the decreased demand in memory. The additional growth in logic is driven by accelerated investments in 7nm nodes and beyond.”

ASML will license its EUV pellicle assembly technology to Mitsui Chemicals. Mitsui Chemicals will assemble and sell pellicles in high volume to lithography customers. In parallel, ASML will continue to develop next-generation pellicle membranes with its partners.

Entegris has acquired MPD Chemicals, a provider of advanced materials to the specialty chemical, technology, and life sciences industries.

Market research
SEMI announced a memorandum of understanding with FIRST Global, a non-profit that organizes the FIRST Global Challenge, an international robotics event dedicated to inspiring a passion for STEM (Science, Technology, Engineering and Math) among young students worldwide.

Strategy Analytics predicts that electric vehicles will become a greater proportion of the global vehicle mix. As a result, the demand for power electronic components will account for over 55% of the total semiconductor demand for powertrains by 2026, according to the firm.

The post Week In Review: Manufacturing, Test appeared first on Semiconductor Engineering.


Steven Woo, Rambus fellow and distinguished inventor, talks about the amount of power required to move and store data, and to move it out of memory to where processing is done. This can include changes to memory, but it also can include rethinking compute architectures from the ground up to achieve up to 1 million times better performance in highly specialized systems.
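The scale of the problem Woo describes can be sketched with a back-of-the-envelope model. The energy constants below are rough order-of-magnitude figures commonly cited in the literature for older process nodes, not numbers from this video, and the function is purely illustrative:

```python
# Toy model of why data movement dominates energy in memory-bound workloads.
# Energy constants are rough order-of-magnitude estimates from published
# literature (roughly 45nm-class figures), NOT from the article or video.
PJ_PER_DRAM_WORD = 640.0   # fetch one 32-bit word from off-chip DRAM
PJ_PER_FP_ADD = 0.9        # one 32-bit floating-point add

def energy_pj(num_values, ops_per_value):
    """Return (movement_pJ, compute_pJ) for fetching num_values words
    from DRAM and doing ops_per_value adds on each."""
    move = num_values * PJ_PER_DRAM_WORD
    compute = num_values * ops_per_value * PJ_PER_FP_ADD
    return move, compute

move, compute = energy_pj(1_000_000, ops_per_value=1)
print(f"data movement: {move / 1e6:.1f} uJ, compute: {compute / 1e6:.1f} uJ")
# With one op per fetched value, movement costs roughly 700x the arithmetic,
# which is the gap near-memory and in-memory architectures try to close.
```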

In-Memory And Near-Memory Compute (video)

Find more memory videos here

The post In Memory And Near-Memory Compute appeared first on Semiconductor Engineering.


Semiconductor Engineering sat down to discuss partitioning with Raymond Nijssen, vice president of system engineering at Achronix; Andy Ladd, CEO at Baum; Dave Kelf, chief marketing officer at Breker; Rod Metcalfe, product management group director in the Digital & Signoff Group at Cadence; Mark Olen, product marketing group manager at Mentor, a Siemens Business; Tom Anderson, technical marketing consultant at OneSpin; and Drew Wingard, CTO at Sonics [Sonics was acquired by Facebook in March 2019]. What follows are excerpts of that discussion. To view part one, click here. To view part two, click here.

SE: When it comes to complex systems in which all the players are different, how do you know there won’t be a scenario in which multiple system components hit a deadlock?

Metcalfe: That’s a really important point, because it happens at the system-on-chip level as well. So many times I hear people say, ‘What we need on this system-on-chip is a 3 GHz CPU. That will solve all our problems.’ But as soon as you have a 3 GHz CPU, the problem often transfers somewhere else. If you can’t get data on and off that CPU, it’s going to be idling a lot of the time; maybe a GPU architecture would be better. This fixation on high-speed CPUs is only one part of the partitioning problem. You can make one part of the partition very, very quick, but if nothing else keeps up with it, it’s not going to do much. Balancing the partitions against one another is equally important. You don’t want one thing running very fast and everything else unable to keep up.

SE: In a heterogeneous situation, how is that different from a non-heterogeneous situation in terms of the actual partitioning that needs to be done and the test benches that are needed? How do you approach that as an architect?

Nijssen: It’s totally different in the sense that it’s just so much more complex. If you could make do with one processor that can do it all, then I need to know that one processor, and I need to know it really well, but then I can reason about how my system is going to behave. If every player on the field responds in a different way, then for every one you add, the complexity isn’t additive, it’s multiplicative. It gets so multidimensional that it becomes really difficult to reason about performance, or about the integrity of the system, the robustness, and quite frankly the correctness of the system.

SE: So then, what do you do about that?

Wingard: The earliest SoCs were all heterogeneous, and they were heterogeneous by necessity. It wasn’t that people didn’t know they could put down a set of processors to attack the problem; it just wasn’t an option. The die would have been too big; it wouldn’t have been able to hit the cost target. Some of our earliest chips were the original digital TV chips, and those had an enormous number of processing elements. They had to be able to do MPEG, they had to put out pretty pictures, and they had to have a display controller, and each one of those had its own image processing pipelines associated with it. They had to be stitched together some way, and because it was video, the data sets were too big to keep on chip, so they all had to beat the crap out of memory. They had all those problems. Did they have elegant solutions? Heck no. But what architects are good at is abstraction, so you have to come up with sufficiently conservative assumptions about the behavior. If you make worst-case assumptions, most of these chips don’t work. So the architect makes sufficiently conservative assumptions about what the actual behavior is going to be, and provisions the system based upon those. And then, yes, they come back and take a look at it at the end. But none of these people had the luxury of having software that ran on these chips before they were back in the lab; that software wasn’t ready until long afterwards. Were there surprises? Of course there were surprises, but the best architects figured out the right things to be concerned about. I don’t think that fundamental model changes much, but now we’ve got hierarchies of systems of subsystems of subsystems, and of course it gets more complex. What I think we lacked then, and lack now, is a semi-formal way of describing these interactions. I would like to have the equivalent of static timing analysis for performance. I would like to have static performance analysis.
I don’t think the math is that hard. I think the hard part is getting the models out of the creators of the different components, some of which will be IP vendors, and some of which will be people on your design team, who can actually describe, ‘I work okay if I generate traffic that looks a bit like this, and as long as I get responses that look something like that, then I’m okay.’ Then you could automatically validate that that’s true when you build the blocks, and I could build a system-level performance model that would be formally correct.
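Wingard's "static performance analysis" idea can be illustrated with a toy sketch: each block publishes a traffic contract, and a checker composes the contracts against a shared link. All names and numbers below are invented for illustration; a real tool would also model latency, time windows, and address behavior:

```python
# Hypothetical sketch of static performance analysis via traffic contracts.
# Each block declares what it may generate; a checker verifies that a shared
# link can satisfy the sum of the contracts. All values are invented.
from dataclasses import dataclass

@dataclass
class TrafficContract:
    name: str
    bw_gbps: float     # sustained bandwidth the block may generate
    burst_bytes: int   # worst-case burst size it may emit

def check_link(contracts, link_bw_gbps, buffer_bytes):
    """Return (ok, diagnostics): can the link honor every contract at once?"""
    total_bw = sum(c.bw_gbps for c in contracts)
    worst_burst = sum(c.burst_bytes for c in contracts)
    ok = total_bw <= link_bw_gbps and worst_burst <= buffer_bytes
    return ok, {"total_bw_gbps": total_bw, "worst_burst_bytes": worst_burst}

blocks = [
    TrafficContract("cpu", 3.2, 4096),
    TrafficContract("gpu", 8.0, 16384),
    TrafficContract("display", 1.5, 2048),
]
ok, diag = check_link(blocks, link_bw_gbps=16.0, buffer_bytes=32768)
print(ok, diag)
```

The same check run with a slower link would flag the overload before anything is built, which is the appeal of doing this analysis statically.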

Ladd: Would this be transactional based queuing models that you could use for your performance?

Wingard: The problem with queuing models is that the performance of dynamic RAM is address-pattern dependent, so you get very different behaviors based not just on the number of transactions, but on what addresses they use. If the three of us are all generating addresses that pound on the same bank of memory, the performance is going to be lousy.

Ladd: Couldn’t you add that congestion into the queuing model?

Wingard: You could, if you could understand it, but again, you’d have to get in there.
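The address-pattern dependence Wingard describes can be demonstrated with a toy open-row DRAM model. The timing numbers and address mapping below are illustrative, not taken from any real controller or datasheet:

```python
# Toy open-row DRAM model: the same number of transactions costs very
# different amounts of time depending on the address pattern. A row hit is
# cheap; a row miss (precharge + activate) is expensive. All numbers and
# the address mapping are illustrative.
ROW_HIT_CYCLES, ROW_MISS_CYCLES = 4, 40

def service_cycles(addresses, bank_shift=12, bank_bits=3):
    """Cycles to service an address stream, tracking one open row per bank."""
    open_row = {}  # bank -> currently open row
    cycles = 0
    for addr in addresses:
        bank = (addr >> bank_shift) & ((1 << bank_bits) - 1)
        row = addr >> (bank_shift + bank_bits)
        if open_row.get(bank) == row:
            cycles += ROW_HIT_CYCLES
        else:
            cycles += ROW_MISS_CYCLES
            open_row[bank] = row
    return cycles

sequential = list(range(0, 1024 * 64, 64))          # streams through rows
conflicting = [i * (1 << 15) for i in range(1024)]  # new row, same bank, every time
print(service_cycles(sequential), service_cycles(conflicting))
```

Both streams issue 1,024 transactions, yet the conflicting stream takes many times longer to service, which is exactly why a transaction-count-only queuing model misleads.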

Kelf: This is where Portable Stimulus fits in. Portable Stimulus has been billed as portability between emulation and simulation (https://www.accellera.org/activities/working-groups/portable-stimulus), and that’s all fine, but it’s actually rubbish. What it really is, I think, is trying to supply what you just said: an almost semi-formal way of describing real, high-powered SoC scenarios at a level from which you can drive tests. That includes queuing, but don’t think of it just as a queuing model. Think of it as a full scenario model that handles the control and the data at the same time, looking a lot at memory interaction, addresses, cache coherency, and all these kinds of things at the SoC level. That allows you to really wring out the SoC by driving C tests and transactions without having to have an operating system running on the thing, or even bare-metal software tests. You can think of it, almost, as a substitute for the real software running on the chip, by providing scenario models that generate software tests and hardware tests at the same time and wring out the SoC as a whole. That’s what we’re trying to do.

Olen: It’s called ‘Portable Stimulus,’ but that’s probably one of the most misleading labels, because what it actually is is a declarative specification of behavior, from which clever tools can generate stimulus; it is not stimulus itself. It’s a BNF (Backus-Naur form)-based, semi-formal description of behavior, and it actually enables partitioning, because you could go off and write your Arm core description, someone else could write a PCI Express transaction generator, and I could write a USB interface. We could all partition our verification challenge and test all of those things separately, then bring them all together at the SoC level, and then you could write a high-level C code generator at the top that controls all of it, so none of us has to multithread.
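As a loose illustration of the declarative idea Olen describes (this is not Portable Stimulus syntax, just a hypothetical sketch): the model declares legal behaviors and their dependencies, and a "tool" generates many concrete, legal test orderings from it:

```python
# Hypothetical sketch of a declarative behavior model: actions declare which
# other actions must complete first, and a generator emits randomized legal
# schedules. Action names are invented; this is not PSS syntax.
import random

ACTIONS = {
    "init_ddr": set(),
    "selftest": set(),          # independent; can land anywhere in the order
    "dma_in":   {"init_ddr"},
    "compute":  {"dma_in"},
    "dma_out":  {"compute"},
}

def generate_test(seed):
    """Emit one legal test: a randomized order that respects the deps."""
    rng = random.Random(seed)
    done, schedule = set(), []
    while len(done) < len(ACTIONS):
        ready = sorted(a for a, deps in ACTIONS.items()
                       if a not in done and deps <= done)
        choice = rng.choice(ready)
        schedule.append(choice)
        done.add(choice)
    return schedule

for seed in range(3):
    print(generate_test(seed))
```

Each seed yields a different but always-legal ordering, which mirrors the "describe the behavior once, let the tool find many tests" workflow described in the discussion.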

Nijssen: One of the difficulties with individual partitions is that their behavior is becoming more dynamic these days. The cache example is a very good one. The timing and performance behavior of a block depends on what is running on the other CPUs; it’s workload-dependent. Now, in my verification, do I have to cover all possible interactions of the programs that might be running on a GPU, CPU, FPGA, or whatever, programs that might be stepping on the same cache line, or where one leaves the whole cache dirty and affects the performance of the others when it had certain QoS guarantees to deliver? The problem is that the partitioning itself presupposes that I can separate them out, that they live in their own worlds, that I can do my thing there, verify them, implement them or whatever, then put them together and have a system that’s going to deliver its performance. I think the thing we’re pointing out is that that is not true. They interact, and they change each other’s behavior and performance in a very drastic way, so the partitioning itself is almost a misnomer, or at least it sets the wrong expectation that you can just separate these blocks into different partitions. There are going to be surprises if the customer changes the workload. You thought they were going to put this chip in one thing, used one way, and tomorrow they put it in something else, or some other customer comes along, or some hacker tries a DoS attack. Those are all cases where people want performance guarantees. In data centers, for example, they want 100.0% of Ethernet throughput. No matter what the packet-size scenarios are, you have to guarantee 100.0% regardless of what else might be going on. How do you do that with all these systems whose behaviors are now becoming more and more interdependent?

Anderson: The promise of Portable Stimulus is that it’s not the stimulus that’s portable, it’s the model. One of the things I’ve observed as this technology has started to catch on and be used more widely is that the partitioning people use for the model often doesn’t bear much resemblance to the actual design, and that has some advantages. You have a little more flexibility in the way you approach top-level verification and assess performance and power and all the other things we’ve talked about. You have to have some model to run to get those answers. You probably also have your architectural C model, which is likely quite different from the partitioning as well. This is one layer down, able to do more detailed analysis, yet still not completely tied to the physical partitioning. That’s a level of abstraction that has a lot of value in the Portable Stimulus space.

Kelf: With the partitioning of the verification, you do have individual tests that go with individual blocks. But when it comes to testing the whole SoC, you might reuse a checker on one block to make sure it’s still running right, but the SoC tests are very different, because now you’re modeling at the SoC level. You can’t really partition that up; you’ve got to think of a scenario that runs across the whole thing. Actually, what Tom described is almost exactly the sweet spot where these things are being used the most, which is looking at cache coherency and performance across these partitions. Instead of trying to figure out all the individual tests and find all the individual corner cases, you describe a high-level model of the whole thing and then let the tool work that model in lots of different ways, find lots of different tests, and run them on an emulator so you can run many of them and try to hit one of those nasty situations.

SE: How does all of this look moving forward, practically speaking?

Nijssen: This is getting so complex. Subsystems within subsystems within subsystems, each influencing the others’ behavior. There’s no way you can run emulation on all the different workloads that your customer’s customer’s customer might someday run. The takeaway is that you’re going to get it wrong, even if you thought you did everything right. Maybe that’s because it was just not doable; you can’t cover this whole universe of interactions. So the question is, what do you do? You’ve made your silicon, and you can’t wait until every possible scenario and combination has been verified with emulation or other modeling techniques, no matter how strong they are. You have to release your product, and you’re going to get it wrong. What do you do? The problem is with the partitioning, because these effects spill over between partitions. How do you make your system flexible enough that, after the silicon goes out, you’ll be able to make changes once you learn, once new workloads appear that weren’t even available when you were designing, or specified in the market requirements document when you got it? The question is how you add that flexibility to your system so you can adapt to this changing environment.

Wingard: As a NoC provider, we’ve had to do that forever. The different scenarios that Dave talks about are something that’s front and center to our customers who have to build multi-mode devices and so we have programmability in our arbitration systems, in our security systems, and things like that, so that you can optimize for modes, but you also can put in margin and decide how you’re going to allocate that margin as the actual software is running on the actual system on the actual board. That’s something we’ve had to worry about a lot.
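A minimal sketch of the kind of post-silicon programmability Wingard alludes to, assuming a weighted arbiter whose weights sit in software-writable registers. The class, its weights, and its policy are invented for illustration:

```python
# Hedged sketch of runtime-programmable arbitration: weights live in
# "registers" that software can rewrite after the chip ships, shifting
# bandwidth between initiators without a respin. All names and the
# credit-based policy are invented for illustration.
class WeightedArbiter:
    def __init__(self, weights):
        self.weights = dict(weights)   # initiator -> grants per round
        self._credits = {}

    def reprogram(self, initiator, weight):
        # In hardware this would be a config-register write from software.
        self.weights[initiator] = weight

    def grant(self, requesters):
        """Grant the requester with the most credits left in this round."""
        if not any(self._credits.get(r, 0) > 0 for r in requesters):
            self._credits = dict(self.weights)   # start a fresh round
        winner = max(requesters, key=lambda r: self._credits.get(r, 0))
        self._credits[winner] -= 1
        return winner

arb = WeightedArbiter({"cpu": 3, "gpu": 1})
grants = [arb.grant(["cpu", "gpu"]) for _ in range(8)]
print(grants)  # the cpu receives three grants for every gpu grant
```

Calling `reprogram("cpu", 1)` at runtime would rebalance subsequent rounds to 1:1, which is the "allocate margin while the actual software runs on the actual system" idea.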

Metcalfe: Partitioning also has an effect on schedule. We talked earlier about different types of partitioning; from an implementation point of view, designing a chip with 64 of the same thing on it is really different from designing a chip with 64 different things. When designers make that partition decision, they’re often saying, ‘I could have this really weird hardware to do this, but if I just do it in a CPU, I can get it done a little quicker.’ That is one of the constraints people face on a daily basis in driving partitioning decisions.

Wingard: Actually, that’s something we find pretty interesting. At the top level of these designs, those partitioning choices can be more or less rigid based upon the design flow the back-end team has chosen to use. Because we’re chip people, we’ve seen examples of ‘we want everything to be optimized,’ including schedule. As a result, we’ve run into situations where we’ve gone to significant effort to build a single on-chip network fabric that logically spans the whole die but is physically partitioned into many different pieces. When we run into a flow that imposes early restrictions, where the pin list can’t change, we find a completely different class of results than when we end up with a flow that allows it to change. I imagine you prefer customers to pick flows where flexibility is encouraged, because I’ve never seen a locked-down pin-list partitioning that survived. The existence proof is there that it doesn’t fundamentally work, but there are still people who seem to want to keep doing it. It surprises me that that would be the preferred route.

Metcalfe: The methodology is very important, and it’s very much customer- and design-specific. But again, coming back to the marketing requirements: you can design anything you want if you have enough time, but the marketing guys are going to say, ‘I need this thing done by the end of the year. What compromises do I need to make to get it done by the end of the year?’ Well, you’d better not partition it that way, because if you do, we’re not going to have it ready by the end of the year. That schedule component comes from forces well outside the engineering discipline, but it’s equally important in terms of delivery.

Anderson: There’s a lot of stuff that comes in from outside. One thing I wanted to touch on, which has come up a couple times, is re-partitioning. You do your best job, know there’s going to be some iteration, think you’ve got it locked down, then you’re really late in the project and something comes along to make you have to go back and rethink it. Maybe it’s that the competition has a new feature you’ve got to add or the silicon vendor comes back and says they were wrong about some new process. How do you deal with that? What is most likely to screw you up at that stage, and then how do you deal with it?

Wingard: We built our products around generators. My RTL people don’t code RTL, they code Python that generates RTL, and we did that specifically wrapped in the EDA environment because as someone who had integrated big chips before I knew that those things were there. What we measure is how many minutes does it take to get back to where I thought I was yesterday and so we tried to build that technology.
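A minimal sketch of the "Python that generates RTL" approach Wingard describes, with an invented generator function; a production generator framework would also emit interfaces, parameter checks, assertions, and testbenches:

```python
# Minimal illustration of generator-based RTL: parameters in, Verilog text
# out. The function name and module template are invented for this sketch.
def gen_register_stage(name, width):
    """Generate Verilog for a simple parameterized pipeline register."""
    return f"""\
module {name} (
  input  wire               clk,
  input  wire [{width - 1}:0] d,
  output reg  [{width - 1}:0] q
);
  always @(posedge clk) q <= d;
endmodule
"""

print(gen_register_stage("pipe_reg32", 32))
```

Because the RTL is produced from code, a late re-partitioning becomes a parameter change and a regeneration, which is what makes "minutes back to where I was yesterday" measurable.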

Olen: From the partitioning point of view, what we’ve all dealt with is that system architects tend to focus on design intent, while partitioning is largely, though not entirely, implementation. So there’s a bridge between intent and implementation, and thankfully that bridge keeps all of us employed.

The post Partitioning Drives Architectural Considerations appeared first on Semiconductor Engineering.


Arm Flexible Access offers system-on-a-chip design teams the capability to try out the company’s semiconductor intellectual property, along with IP from Arm partners, before they commit to licensing IP and to pay only for what they use in production. The new engagement model is expected to prove useful for Internet of Things design projects and for other applications. Rene Haas, president of Arm’s Intellectual Property Group, said in a statement, “By converging unlimited design access with no up-front licensing commitment, we are empowering existing partners and new market players to address new growth opportunities in IoT, machine learning, self-driving cars, and 5G.” The program will take in IP from such partners as AlphaICs, Invecas, and Nordic Semiconductor. Dipti Vachani, senior vice president and general manager, Automotive & IoT, details Arm Flexible Access in this blog post.

The Defense Advanced Research Projects Agency extended the contract for the Posh Open Source Hardware (POSH) program to continue work in analog/mixed-signal verification as part of the second phase of DARPA’s Electronics Resurgence Initiative, in partnership with Lockheed Martin, Synopsys reports. The next phase of ERI builds on the initiative’s existing goals of enforcing electronics security and privacy and providing access to differentiated electronics design capabilities to benefit aerospace and defense interests.

Synopsys is teaming with Ixia, a Keysight Technologies business, for system validation of complex networking system-on-a-chip devices using emulation and a virtual tester. The electronic design automation vendor brought out the ZeBu Virtual Network Tester Solution, integrating the ZeBu emulation system with Ixia’s IxVerify virtual network tester.

Trend Micro says its cloud-based Deep Security as a Service is now available on the Microsoft Azure Marketplace. The offering pairs security software-as-a-service with consolidated cloud billing and usage-based, metered pricing.

Since launching its narrowband IoT network in the U.S. last year, T-Mobile US is now providing asset tracking on the NB-IoT network. T-Mobile for Business collaborated with Roambee to offer the BeeAware asset tracking IoT product, priced at $10 per device per month and including portal access and NB-IoT data. Meanwhile, Slovakia-based Sensoneo is using T-Mobile’s nationwide NB-IoT LTE network and Twilio’s SIM cards to keep tabs on its dumpsters in California, Colorado, and Ohio with the placement of smart sensors in those bins.

Internet of Things
IoT Analytics named the top 10 IoT startups of 2019. They are, in alphabetical order: Arundo Analytics, Bright Machines, Dragos, Element Analytics, FogHorn Systems, Iguazio, IoTium, Preferred Networks, READY Robotics, and SparkCognition.

Sunnyvale, Calif.-based FogHorn partnered with Porsche in Program 6 of Startup Autobahn, a European effort linking automotive manufacturers with promising startups. The program involved a 100-day challenge to develop a prototype of a more secure means for drivers to access vehicles, using real-time video recognition and multi-factor authentication. FogHorn and other Program 6 participants demonstrated their prototypes this week in Stuttgart, Germany.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory, Dartmouth College, and the University of Washington worked together on an AI system for designing aerial drones that combine the capabilities of fixed-wing aircraft and quadcopters. “Our method allows non-experts to design a model, wait a few hours to compute its controller, and walk away with a customized, ready-to-fly drone,” said MIT CSAIL grad student and lead author Jie Xu. “The hope is that a platform like this could make these more versatile ‘hybrid drones’ much more accessible to everyone.” The research team will present a paper on its work at the Siggraph conference in Los Angeles, set for July 28 through August 1.

WiSig Networks, a developer of products for 5G wireless communications, selected Palma Ceia SemiDesign’s NB-IoT transceiver for an ecological monitoring and management system for use in agricultural applications. WiSig was incubated at the Indian Institute of Technology Hyderabad.

Japan’s SoftBank Corp. validated the Monarch chip from Sequans Communications for use on its NB-IoT network.

Avnet will give away 20,000 Azure Sphere starter kits to IoT developers, helping them to design and deploy highly secure, end-to-end IoT offerings. The Avnet Azure Sphere MT3620 Starter Kit features the Avnet-developed Azure Sphere module based upon Microsoft’s secure Azure Sphere operating system and Azure Sphere Security Service hosted on MediaTek’s MT3620 secure microcontroller. Contest submissions can be made until September 30, 2019, on Avnet’s Hackster.io and element14 communities.

The Federal Election Commission is permitting Area 1 Security of Redwood City, Calif., to provide aid to 2020 presidential candidates, helping them fend off the kind of malicious email attacks Russian hackers employed during the 2016 campaign. FEC attorneys had opposed the proposal, saying it would violate federal campaign finance laws by allowing companies to offer cybersecurity services at a discount. The commission overruled the legal staff, though its decision applies only to Area 1. FBI Director Christopher A. Wray warned in April that Russian groups are expected to pose a “significant counterintelligence threat” to the 2020 elections.

This week in Huawei: A United Kingdom parliamentary committee concluded that equipment from Huawei Technologies could be installed on the edges of the country’s 5G wireless networks. The Science and Technology Committee in the House of Commons noted that U.K. carriers already use networking systems from Chinese vendors. Meanwhile, bills were introduced this week in the U.S. Senate and House of Representatives that would give Congress control over the Department of Commerce’s “entity list” of foreign companies that aren’t allowed to do business with American suppliers of microchips and other products. “American companies shouldn’t be in the business of selling our enemies the tools they’ll use to spy on Americans,” Senator Tom Cotton, R-Ark., one of the sponsors, said in a statement. The Wall Street Journal, citing people familiar with the matter, reports that Huawei plans to lay off employees in the U.S. as a result of the company’s blacklisting by the Trump administration. Futurewei Technologies, the Chinese company’s U.S.-based research and development subsidiary, employs about 850 people in the U.S.; hundreds of those workers may be terminated, one source said. Vietnam is, for now, steering clear of Huawei equipment, partly due to the country’s thorny history with China; the ruling Communist Party is trying to steer a path between China and the United States, this analysis notes. Finally, Huawei is criticizing a move by Italy’s government to take a greater role in the development of 5G cellular communications, which could keep Huawei and ZTE from supplying networking gear in Italy. Luigi De Vecchis, chairman of Huawei Italia, said the Italian government is discriminating against his company.

The National Association of State Chief Information Officers has endorsed the State and Local Government Cybersecurity Act of 2019, a bill introduced in the Senate by Senator Gary Peters, D-Mich., and Rob Portman, R-Ohio. The proposed bill would amend the Homeland Security Act of 2002, encouraging collaboration among federal, state, and local governments on cybersecurity.

Lots of security software claims to incorporate AI technology. Is there actually AI in those packages, or is “AI-washing” going on? Larry Lunetta, vice president of marketing security solutions at HPE Aruba, is skeptical about such claims. “There’s a set of waves that the marketers ride, and AI is certainly one now. The interesting thing is as you go through those phases, the technologies get tougher and tougher to actually execute. And I think part of the disappointment that you alluded to from an AI results perspective, is the fact that you know, when you AI wash, you may have algorithms, you may have actually found some data to train some of the models, but it’s not sufficient to deliver a practical result,” he says in this interview.

The Department of Energy wants to know if blockchain technology could be useful in securing the national power grid. Xage Security last month received a Small Business Innovation Research grant from the DoE to create a blockchain-based security fabric in six months. What Xage comes up with will then be compared with other security offerings.

Symantec reports there is an exploit that can expose media files in Telegram and WhatsApp, which can then be manipulated by malicious actors. The company’s security researchers are calling this “Media File Jacking.” The cybersecurity company notified WhatsApp, now a Facebook subsidiary, and Telegram about the vulnerability.

LexisNexis issued its State of Patient Identity Management report, which includes the results of an online survey of more than 100 health care professionals. Among the respondents, 58% said they believe the security of their patient portals is above average or superior to the patient portals of other organizations. Ninety-three percent of respondents said authentication for their patient portals involves only a username and password, while nearly two-thirds said they have implemented multi-factor authentication.

RedSeal surveyed hundreds of senior IT and security professionals about the tenuous connection between CEOs and their information security teams. While 92% of respondents said their enterprises created specific plans to protect the CEO from cyberattacks and data breaches, 54% of the security personnel said their CEO is ignoring these plans.

As expected, Ford Motor and Volkswagen Group unveiled their plans for corporate cooperation in the development of electric vehicle and autonomous vehicle technology. VW is directly investing $1 billion in Argo.ai, Ford’s AV startup. The German company also will combine its self-driving car unit, valued at $1.6 billion, with Argo, and VW is paying another $500 million to purchase Argo shares from Ford, for a total investment of $3.1 billion. Argo is now valued at more than $7 billion. Ford is investing $1 billion in Argo over five years. Ford and VW will both hold minority equity stakes in the Pittsburgh-based startup. VW agreed to share its MEB EV architecture with Ford for vehicles to be made and sold in Europe, a deal that could yield up to $20 billion in revenue for VW.

A strategic realization is spreading over the automotive industry: the big automakers will soon have to decide whether to focus on EVs or AVs. Investors are placing higher valuations on AV developers, while the market for EVs remains a niche in the industry. “Wall Street wants everyone to focus on AVs. The really expensive, nearer-term problem to solve is how to make EVs profitable. AVs don’t have to be solved in the next couple of quarters,” says Reilly Brennan, founding general partner at Trucks Venture Capital. AlixPartners, the consulting firm, estimates that $225 billion will be invested in electrification between now and 2023, while AVs have received $85 billion in funding. VW, for example, has committed €30 billion (about $33.7 billion) toward EVs through 2023, with plans for 22 million battery-powered vehicles by 2028. Arun Kumar of AlixPartners predicts it will be eight to 10 years before carmakers start getting a return on EV technology.

Yandex of Russia earlier this year struck a deal with Hyundai Motor to contribute self-driving car tech to Hyundai’s Mobis division. Yandex last week showed off an autonomous Hyundai 2020 Sonata with Yandex’s sensors and software. The new model will go on sale during the fall.

Renault is forming a joint venture with China’s Jiangling Motors, aiming for the EV market in China. The French company will take a 50% stake in the JV. Renault will invest about €128.5 million (more than $144 million) for the equity stake.

French Finance Minister Bruno Le Maire said Wednesday that the Renault Nissan Alliance still has priority with the government of France. “The priority today is to develop an industrial strategy for the Renault-Nissan alliance,” Le Maire said in an interview, adding, “After that, we will have to look at how to consolidate this alliance and it is only on this basis that we will be able to explore future developments.” He denied reports that the French government put the kibosh on the recent merger negotiations between Renault and Fiat Chrysler Automobiles.

Partially autonomous vehicles now roam the highways and streets of America, but realizing the dream of the fully autonomous vehicle may be a decade or so in the future, this analysis notes. While Waymo autonomous shuttles are now conveying residents of a Phoenix suburb, those vehicles never have to contend with blizzards that can obscure the roadway. “We overestimated the arrival of autonomous vehicles,” Ford CEO Jim Hackett said at the Detroit Economic Club in April. “You see all kinds of crazy things on the road, and it turns out they’re not all that infrequent, but you have to be able to handle all of them,” says Argo.ai CEO Bryan Salesky. “With radar and high-resolution cameras and all the computing power we have, we can detect and identify the objects on a street. The hard part is anticipating what they’re going to do next.”

Keysight Technologies is addressing automotive cybersecurity through a new program aimed at security professionals among the leading automotive manufacturers and their Tier 1 suppliers. “In today’s vehicles, heavy reliance on connectivity and software improves convenience but increases the potential attack surface for emerging and evolving cyber threats,” Siegfried Gross, vice president and general manager of Keysight Automotive and Energy Solutions, said in a statement. “This new program enables OEMs and Tier 1s to enhance vehicle safety by defining, implementing and deploying a consistent, company-wide approach to the testing of potential vulnerabilities.”

While automation in mobility and transportation will eliminate certain jobs, there will be opportunities for new positions, such as training AI systems for automated machinery, Sudha Jamthe writes for Axios. “As artificial intelligence is integrated into professions ranging from administrative work to customer support, people will need to be trained to work alongside machines — but also on non-technological skills, like customer service and decision making,” she notes.

While carmakers around the world are reducing workforces, Ford’s South Africa subsidiary is adding 1,200 jobs at an assembly plant, an increase of more than 25%, to add an extra shift and to increase production.

Broadcom and Symantec apparently broke off their acquisition negotiations on Monday. The companies were reportedly unable to agree on a price for Symantec shares. SYMC took a hit on the news, falling 10.7% that day to $22.84 a share. The stock has since gained back some of the loss, as investors try to gauge whether a new suitor will emerge, or if Broadcom will return to the negotiating table.

Motorola Solutions acquired WatchGuard; financial terms weren’t revealed. The Allen, Texas-based company provides in-car video systems, body-worn cameras, evidence management systems, and software for the law enforcement market.

MLU, a Russian ride-hailing joint venture of Uber Technologies and Yandex, acquired a competitor, Vezet, for about $204 million, including $71.5 million in cash. MLU is also investing $127 million over three years for driver training, loyalty incentives, and other business purposes.

Autokiniton, a portfolio company of KPS Capital Partners, agreed to acquire Tower International for $31 a share, approximately $900 million in cash. Based in Livonia, Mich., Tower makes automotive structural components and assemblies.

Bedford, Mass.-based Aspen Technology agreed to acquire Mnubo of Montreal, which supplies purpose-built AI and analytics infrastructure for the IoT. The purchase price is C$102 million (nearly $78.2 million). AspenTech last month purchased Sabisu of the U.K. Sabisu will complement Mnubo in providing flexible enterprise visualization and workflow tools for real-time decision support.

8×8 acquired Singapore-based Wavecell for about $125 million in cash and stock. Wavecell offers a worldwide communications platform-as-a-service.


Medallia announced the pricing of its initial public offering, selling 15.5 million shares at $21 per share. The company is selling 14.325 million shares, while selling stockholders are offering 1.175 million shares. In total, the IPO is raising $325.5 million, with just over $222 million going to Medallia, which will trade as MDLA on the New York Stock Exchange.

CloudMinds filed for a $500 million IPO, planning to trade as CMDS on the NYSE. The Chinese company provides an end-to-end cloud robotics system. Citigroup Global Markets is the lead underwriter. CloudMinds has raised more than $300 million in private funding; the SoftBank Vision Fund owns 34.6% of the company prior to the IPO, while Keytone Ventures holds 5.1%.

Salt Lake City-based Health Catalyst set its IPO terms to 6 million shares at $20 to $23. At the midpoint pricing, the provider of health-care data…


Tools & IP
Arm has a new access and licensing model for its IP. Flexible Access allows SoC design teams to initiate projects before they license IP by paying a yearly fee for immediate access to a broad portfolio of technology, then paying a license fee only when they commit to manufacturing, followed by royalties for each unit shipped. IP available through Arm Flexible Access includes the majority of processors within the Cortex-A, -R and -M families, as well as TrustZone and CryptoCell security IP, select Mali GPUs, system IP, and tools and models for SoC design and early software development.
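The fee structure described above can be sketched as a simple cost model. All figures and the function name below are hypothetical illustrations, not published Arm pricing, but the shape follows the description: an annual access fee, a license fee only on commitment to manufacturing, then per-unit royalties.

```python
# Hypothetical sketch of the Flexible Access fee structure described above.
# Every monetary figure here is invented for illustration.

def flexible_access_cost(access_years: int,
                         annual_fee: float,
                         license_fee: float,
                         royalty_per_unit: float,
                         units_shipped: int,
                         committed_to_manufacturing: bool) -> float:
    """Annual access fee is always paid; the license fee and per-unit
    royalties are only incurred once the design commits to manufacturing."""
    cost = access_years * annual_fee
    if committed_to_manufacturing:
        cost += license_fee + royalty_per_unit * units_shipped
    return cost

# A team that evaluates IP for two years but never tapes out pays only access fees
exploration_only = flexible_access_cost(2, 200_000, 1_000_000, 0.05, 0, False)

# The same project committing to manufacture 10M units adds license fee + royalties
shipped = flexible_access_cost(2, 200_000, 1_000_000, 0.05, 10_000_000, True)

print(exploration_only, shipped)  # 400000 vs 1900000.0
```

The point of the model is the asymmetry: exploration that never reaches silicon costs only the yearly access fee, which is what lets teams start projects before licensing.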

Cadence debuted a Portable Test and Stimulus Standard (PSS) 1.0-compliant source code version of its Perspec System Methodology Library (SML), along with PSS methodology documentation. The library enables Perspec System Verifier customers to access PSS source code for any of the SML functions to develop models. The source-form library and methodology documentation also will be provided to non-Perspec users to help promote adoption of PSS. AMIQ checked the new methodology library with its DVT Eclipse IDE to confirm that it is compliant with the PSS Language Reference Manual.

Silexica released the latest version of its SLX development tools. SLX FPGA features improvements to prepare and optimize C/C++ code for high-level synthesis (HLS) in the Xilinx Vivado HLS design flow, including code transformations for synthesizability, hardware-aware parallelism extraction, and a code refactoring wizard. SLX C/C++ features improved multi-process analysis, including support for named POSIX semaphores, extended graphical visualization, user-defined thread names, and performance estimation for each processor.

Synopsys and Ixia, a Keysight Business, are teaming up on system validation of complex networking SoCs by integrating Synopsys’ ZeBu emulation system with Ixia’s IxVerify virtual network tester. The combined ZeBu Virtual Network Tester Solution aims to replace traditional in-circuit emulation for networking SoCs and provide a full-featured protocol testing solution for pre- and post-silicon use.

SmartDV announced Verification IP for DisplayPort 2.0. The VIP includes a configurable bus functional model (BFM), protocol monitor and library of integrated protocol checks, and supports all major verification languages and methodologies, including OVM, UVM and SystemC. DisplayPort 2.0 allows for a max payload of 77.37 Gbps and uses the Thunderbolt 3 PHY layer.

Arasan Chip Systems uncorked MIPI D-PHY IP supporting speeds of up to 2.5 Gbps for TSMC 22nm SoC designs. Compliant with the MIPI D-PHY v1.2 specification, the IP reuses multiple silicon-proven blocks from the company’s 28nm implementation to reduce risk, while being optimized for the TSMC 22nm node to cut area and power. It is integrated with the company’s CSI Tx, CSI Rx, DSI Tx and DSI Rx IP.

Alphawave IP debuted its PipeCORE PCIe Gen1-5 PHY available on TSMC’s 7nm process technology. The IP can also demonstrate 64Gbps PAM4 rates for early PCI-Express Gen6 adopters. PipeCORE is highly configurable and available in one, two, four, eight and sixteen lane configurations. It consumes less than 200mW of power at 32Gbps.

A collaboration among UltraSoC, the University of Southampton, Coventry University, and cybersecurity consultancy Copper Horse secured £2m ($2.51m) in support from the Innovate UK government agency to develop an on-chip monitoring solution that identifies security and safety issues in connected and autonomous vehicles. Along with the monitoring solution, the group will develop a testbed demonstrator representing a full-scale automotive functional architecture and will model cybersecurity scenarios to test the robustness of the project’s output.

Innovium adopted Cadence’s Innovus Implementation System for its 16nm TERALYNX 12.8Tbps Ethernet switches for data centers. Innovium cited the tool as helping it meet PPA goals and improve overall engineering productivity.

Synopsys will work on DARPA’s Posh Open Source Hardware (POSH) program, focusing on analog/mixed-signal (AMS) emulation as part of the Electronics Resurgence Initiative (ERI). The company will work with Lockheed Martin in the next phase of the program to provide systems and security expertise.

NSITEXE selected Cadence’s full digital design flow for its high-efficiency, high-quality data flow processor (DFP) IP for automotive and industrial applications. Compared with its previous competitive solution, NSITEXE cited a 75% reduction in turnaround time, an 8.5% improvement in power, a 35% improvement in performance, and a 3.5% reduction in area.

The post Week In Review: Design, Low Power appeared first on Semiconductor Engineering.


No matter how the ongoing dispute between the United States and China turns out, damage already has been done. It’s not the kind of damage that is immediately visible to the outside world. It’s more of the long-term, policy-shift kind of problem, which over time will likely prove much worse.

Many executives have termed recent sanctions on Huawei and other Chinese companies “China’s Sputnik moment,” because Chinese officials recognize they need to develop their own technology rather than relying on non-Chinese companies. From a high-level perspective that’s a problem, although one that is potentially reversible. But drop it down 30,000 feet, and the view is that Chinese manufacturers don’t want to be dependent upon any single country, because they’ve had a first-hand view that supplies of key parts can evaporate overnight. So they will either support local suppliers, or they will look for suppliers in multiple countries. That’s a much bigger problem, and one that is not likely to be shaken away anytime soon.

Having a reliable supply chain is essential for any business. And if Chinese companies want to compete on a global stage, or even internally, they need all of the necessary parts, equipment and raw materials to make their products. That’s basic business. You can’t make processors without memory, even if you have the best engineers on the planet.

But this also erases some much-needed improvements in supply chain management that were added after the 2001 crash. The big problem in the years leading up to the crash was that demand for parts far exceeded the supply, particularly for communications servers to fuel the dot-com revolution. As a result, manufacturers began double and triple ordering components and tools. When the dot-com market dropped into a free-fall, they were stuck with months of inventory that eventually was dispersed on the gray market.

It took years for the industry to recover from that disaster, and the move to just-in-time manufacturing and better supply chain monitoring had a significant impact on inventory in the 2008 recession. While the 2008 crash turned out to be a deep and lasting economic downturn, semiconductor inventory was not a major issue. As a result, the market recovered much more rapidly than what would have been possible without those changes.

The introduction of a parallel supply chain reverses years of improvements. It’s true that a single source makes it difficult to ensure competitive pricing, and some regions have more expertise than others in certain areas. This is why second sources of some products are often located in the same region.

China’s supply chain is relatively opaque, however, and increasingly it will be disconnected from the rest of the global supply chain. So as China begins ramping up inventory, that won’t be obvious to other suppliers. And when there is a glut of inventory, it will likely be dumped into the gray market without warning.

For the past decade, the supply chain has been a finely tuned instrument that is the model for any industry. Over the next decade, that may not be the case. Damage already has been done, and that will become obvious in the years to come.

Related Stories
CEO Outlook: Rising Costs, Chiplets, And A Trade War
Opinions vary on China’s technology independence and its ability to develop key technology internally.
3D NAND Race Faces Huge Tech And Cost Challenges
Shakeout looms as vendors struggle to find ways to add more layers and increase density.
China’s Foundry Biz Takes Big Leap Forward
30 facilities planned, including 10/7nm processes, but trade war and economic factors could slow progress.
SEMI Calls For U.S.-China Tariff Removals
U.S. and Chinese retaliatory tariffs will stifle innovation and cost SEMI members nearly $800 million in annual duties.
China Knowledge Center
Other top stories, blogs, white papers on the China semiconductor industry

The post The Danger Of Twin Supply Chains appeared first on Semiconductor Engineering.


China is once again making a concerted effort to get its domestic DRAM industry off the ground.

Past efforts have fallen short or failed. This time around, it’s unclear if China will succeed, but the industry should pay close attention here.

So why would China want to play a bigger role in the tough and competitive DRAM business? For one thing, the U.S. and China are in the midst of a trade war. And recently, the U.S. Department of Commerce effectively banned China’s Huawei from buying components from U.S. companies by placing it on the “entity list.”

The U.S. plans to ease the restrictions, but the ban sent shock waves through China. It also prompted China to accelerate its efforts to become more self-sufficient in semiconductor design and production. That includes DRAMs.

This is a long and complicated story. It goes back several years when China launched a number of efforts to become more self-sufficient in semiconductor technology and production. For years, China has relied on foreign suppliers for its chips. And so, the nation has been plagued by a huge trade deficit in semiconductors.

Then, in 2015, China launched a major initiative called “Made in China 2025.” The goal was (and still is) to increase the domestic content of components in 10 key areas—IT, robotics, aerospace, shipping, railways, electric vehicles, power equipment, materials, medicine, and machinery.

As part of those efforts, China continues its march to design and make more of its semiconductors. For memory, though, China’s OEMs still rely 100% on foreign suppliers. Plus, Intel, Samsung and SK Hynix all have memory fabs in China, which sell into the China market.

Several years ago, China announced several major projects to shrink that gap. Among them are:
*Yangtze Memory Technology Co. (YMTC) was set up to develop 3D NAND technology.
*Jinhua Integrated Circuit Co. (JHICC) was formed to develop DRAMs and was building a new fab.
*ChangXin Memory Technology, formerly known as Innotron, was set up to make 22nm DRAMs in a new 300mm fab.

Fast forward. What’s the latest with these efforts? YMTC is supposed to ship its first product this year—a 64-layer 3D NAND device.

JHICC went under last year. In 2018, the U.S. restricted exports of equipment and software to JHICC. Then, UMC, the technology partner for JHICC, pulled out of the project.

Others haven’t thrown in the towel. ChangXin is supposedly readying its first DRAMs.

Then, in the latest announcement, China’s Tsinghua Unigroup plans to enter the DRAM business. Tsinghua is also an investor in YMTC. Actually, Tsinghua’s DRAM ambitions started in 2017, when it announced a $30 billion memory project in Nanjing, the capital of China’s eastern Jiangsu province. The goal was to make DRAMs first, then 3D NAND.

Little has happened since that announcement. Going forward, Tsinghua may look for another location for the DRAM fab, according to TrendForce.

So all told, what are China’s prospects for DRAMs?

1) A long and winding road. China’s domestic memory business has a long way to go before it gets off the ground. It’s far behind its multinational rivals in terms of technology and know-how. It also lacks IP.

2) Money doesn’t buy share. China is pouring billions of dollars into its domestic memory industry. It has little to show for it—yet.

3) Look at Taiwan. In the 1990s, Taiwan made a big effort to enter the DRAM fray. It had backing from Formosa Plastics and TSMC. Today, though, Taiwan’s DRAM players have barely made a dent in the market.

4) Let’s not dismiss China’s memory efforts—yet. It might take several attempts and time to get it right. Even becoming a niche player would be a major accomplishment in the difficult memory business.

Related Stories

China’s Foundry Biz Takes Big Leap Forward

Will China Succeed In Memory?

The post China Latest Goal—More DRAMs appeared first on Semiconductor Engineering.


After years of acute shortages, 200mm fab capacity is finally loosening up, but the supply/demand picture could soon change with several challenges on the horizon.

200mm fabs are older facilities with more mature processes, although they still churn out a multitude of today’s critical chips, such as analog, MEMS, RF and others. From 2016 to 2018, booming demand for these and other chips caused severe shortages for both 200mm fab capacity and equipment in the market.

In the first half of 2019, the 200mm market cooled off amid a slowdown in the IC industry, trade disputes and other factors. On average, 200mm fab utilization rates hover around 75% to 85% today, compared to more than 90% last year, according to figures from SurplusGlobal, a supplier of secondary equipment. Some, but not all, foundry vendors have already seen their 200mm businesses bounce back and are running at 100% fab utilization rates.

At some point, demand for 200mm fab capacity is expected to rebound, meaning device makers and foundry customers once again will scramble for capacity, as in past years. And the industry will face the same problem meeting demand, because there is still a shortfall of 200mm fab equipment in the market.

But finding 200mm equipment could become even more difficult over time. Today, some silicon carbide (SiC) device makers are in the early stages of migrating from 150mm (6-inch) to 200mm (8-inch) fabs. As a result, some SiC device makers will also need 200mm equipment, meaning that there will be more IC vendors scrambling to obtain 200mm gear.

Based on the latest forecast from SurplusGlobal, the industry requires more than 1,000 new or refurbished 200mm tools or “cores” to meet current fab demand, but fewer than 500 are available on the market today. The term “core” refers to a piece of used equipment that must be refurbished to make it usable.

To make matters worse, many of the available 200mm systems in the market are sub-standard. “Many of those won’t even match current configurations or technology needs,” said Emerald Greig, executive vice president for the Americas & Europe at SurplusGlobal. In comparison, the 300mm market is also slow, but there is an excess of 300mm tools in the arena.

Even when a new 200mm tool can be found, prices are relatively high. “Most OEMs resumed 200mm tool manufacturing and are enjoying this market. The tool prices are similar to the 2005 level. However, the current ASPs are much lower than those of 2015,” added Bruce Kim, chief executive of SurplusGlobal.

Nonetheless, the 200mm market, which includes both fabs and equipment, will remain a viable business for some time. To address current and future 200mm demand, several foundry vendors are building new 200mm capacity. In addition, a number of equipment makers continue to build new 200mm tools for traditional CMOS applications. Some are developing 200mm systems for SiC devices.

200mm trends
The IC market is divided into several segments. At the leading edge, chipmakers are ramping up devices at the 16nm/14nm node and beyond in 300mm fabs, where they also produce chips in more mature processes.

Analog, MEMS, power semiconductors, RF and others are produced with mature processes within 200mm fabs. The mainstream wafer size for SiC devices is 150mm.

So not all chips require advanced nodes. “A lot of the devices that we use today don’t require sophisticated processes,” said Subodh Kulkarni, president and chief executive of CyberOptics. “The chips that need sophisticated processes are the latest GPUs, CPUs and memory. The majority of the world still needs generic semiconductors, and that’s not going to disappear.”

Over time, meanwhile, 200mm has become a big business in the IC industry. The first 200mm fab appeared in 1990, and the wafer size became the standard for years. Then, in the 2000s, chipmakers migrated to more advanced 300mm fabs. The 200mm market peaked around 2007 and then declined.

In late 2015, the industry saw an unexpected demand for chips made in 200mm fabs. Since then, the 200mm market has surged and become a sweet spot for many devices. Robust demand for chips in automotive, IoT and wireless is expected to drive the production of 200mm wafers by 16% from 2019 to 2022, according to SEMI.

A typical 200mm fab produces about 40,000 wafer starts per month. These plants make wafers at various nodes, ranging from 6 microns to 65nm. In comparison, a leading-edge 300mm fab is ramping up 7nm with 5nm in R&D.

In total, the number of 200mm fabs in production is expected to increase from 188 in 2016 to 208 by 2021 and 209 by 2022, according to Christian Gregor Dieseldorff, an analyst at SEMI. The figures include integrated device manufacturers (IDMs), foundries and epitaxial wafer lines.

Fig. 1: Total number of 200mm fabs. Source: SEMI/Semiconductor Engineering

“We currently have twelve 200mm facilities on our radar, including R&D and epi, which will start construction in 2019 or later,” Dieseldorff said. “Five of those are in the U.S., four are in China, two are in Europe, and one is in Taiwan.”

For years, foundry vendors have offered 200mm fab capacity for customers. GlobalFoundries, Samsung, SkyWater, SMIC, TowerJazz, TSMC, UMC and others offer various capacities and processes in the traditional 200mm CMOS-based market.

In recent times, several foundry vendors have expanded their 200mm capacities. But after a period of sizzling demand, the 200mm market slowed in the first part of 2019. Lackluster chip demand and trade disputes contributed to the woes.

Going forward, there are signs of a rebound. “The slowdown of 8-inch demand should be a short-term issue due to the ongoing inventory correction from the consumer and automotive segments. The trade war uncertainty has also clouded our visibility,” said Steven Liu, senior vice president of marketing at UMC. “However, we expect Q2 to be the trough, with some improvement coming in the second half.”

Tracking 200mm fab capacity is complex and involves several variables. First, supply and demand are closely tied to the IC business cycles. Layered on top of that is a migration factor. Some chip types will continue to be produced in 200mm fabs for the foreseeable future. Still other chips are migrating from 200mm to 300mm plants.

“Several applications enjoy a sweet spot in 200mm manufacturing, mainly discretes, MEMS and PMICs,” Liu said. “Some products will benefit from the migration to 300mm wafers, such as MCUs and, gradually, large-display driver ICs.”

Foundry vendors would like to move as many chips from 200mm to 300mm fabs as possible. Generally, the industry has an excess of 300mm capacity and equipment.

That may play a role in whether a vendor moves ahead with a planned 200mm fab, or builds a 300mm facility instead. “In China, there are still planned 200mm fabs as well as planned 300mm fabs,” SurplusGlobal’s Greig said. “Some companies may decide to bring up a 300mm fab based on tool availability. There is currently an excess supply of 300mm equipment, making this a buyers’ market.”

Where to buy 200mm tools
At some point, meanwhile, IDMs and foundries will expand their 200mm capacities. So where can you buy 200mm equipment?

Chipmakers can buy new or refurbished 200mm gear from equipment makers, used/refurbished equipment companies, brokers, and through online sites such as eBay. Some chipmakers also sell used equipment on the open market.

Not all 200mm equipment sellers are alike, however. Some offer new tools, while others refurbish existing ones. Some firms even sell systems that are sub-standard or simply don’t work.

In all cases, buyers should examine the equipment in person. The tool may still incorporate antiquated components and software. Some will sell the tool “as is” with no service agreement.

One of the first places to look for 200mm gear is the fab equipment makers themselves. Recently, Applied Materials, KLA, Lam and others have been making new 200mm tools. Generally, equipment from the original OEMs incorporates the latest components, but it also carries a premium.

The secondary or used equipment companies are another source. Some specialize in one tool type, while others sell a range of systems. A few make their own tools.

Dissecting the market
The equipment market is split into different sectors, such as lithography, deposition, etch and inspection. The equipment in each sector may be targeted for a different market.

Lithography tools, which pattern the tiny feature sizes on wafers, are sometimes the most difficult 200mm equipment to procure in the market. “We see some slowdown worldwide. But in general, there is a shortage of 200mm photolithography tools, so demand is still there,” said Stuart Pinkney, vice president of sales for Europe and the U.S. at Nikon, a supplier of lithography equipment. “Lead times on the newer tools are unfortunately stretching. This is more of a function of the product mix situation in our factory.”

Supplies of older 200mm lithography tools are drying up. “Tools that do become available generally need a lot of work. They may have been harvested by previous owners and are generally not in good shape,” Pinkney said.

Several companies make and sell 200mm deposition and etch equipment. Deposition involves putting thin films on surfaces. Etch tools remove materials.

Many suppliers of these tools continue to build new 200mm systems for several applications. “You have RF devices and power management ICs. These are built traditionally on 200mm-type applications,” said Patrick Martin, business development manager at Applied Materials, a supplier of deposition, etch and other equipment. “What we are trying to address is what we traditionally call a core market. These are core tools that sell in 200mm. It’s a market that we’ve maintained, and it is addressed in a different way.”

Demand remains strong for these systems. “From a total market perspective, the demand for 200mm equipment is certainly outstripping availability,” said David Haynes, senior director of strategic marketing for Lam Research, a provider of deposition and etch systems.

Lam develops 200mm tools for several markets. “Lam has a proactive approach to acquiring 200mm cores for refurbishment and thus we are able to meet customer requirements when others cannot,” Haynes said. “At the same time, Lam has never stopped manufacturing most of our 200mm products and so has maintained the supply chain and infrastructure to build new tools when required.”

Others see demand for different applications. “Our first-half demand was good for 200mm and smaller wafer sizes, notably on wide bandgap semiconductors for high-performance power devices,” said Kevin Crofton, executive vice president and chief operating officer at SPTS/Orbotech. “For the second half, we are seeing a pick-up in all sectors, in particular, RF power amplifiers and filtering for 5G infrastructure, which are 200mm and 150mm wafer activities.”

Recently, KLA completed its previously announced acquisition of SPTS/Orbotech. SPTS offers 200mm deposition and etch tools. “We never stopped building new process equipment for 200mm and smaller wafer sizes,” Crofton said. “MEMS, RF, photonics and power semis are made on silicon and other substrates based on 200mm, 150mm and 100mm.”

Inspection and metrology, meanwhile, also are critical in 200mm fabs. Inspection equipment finds defects in chips, while metrology is used to measure structures.

“We’ve seen a bit of a lull in the first half of the year, but we’re definitely witnessing a bounce back, even from large IDMs, who are expanding their 200mm lines to serve growth in IoT and automotive,” said Wilbert Odisho, vice president and general manager at KLA, a provider of inspection and metrology gear.

“Many equipment companies are finding it difficult buying back tools to refurbish, but a majority of KLA’s systems shipped this year will be newly built relaunched systems,” Odisho said. “Supplementing our refurbished business with relaunched systems reduces our dependency on the availability of old tools.”

Meanwhile, CyberOptics sells measurement devices and sensors, subsystems that are integrated into various fab tools at 150mm, 200mm and 300mm wafer sizes. “Semiconductor fabs, whether 200mm or 300mm, need effective tool set-up and maintenance methods that are accurate, precise and fast in order to speed up equipment qualification, minimize costly maintenance downtimes, and ultimately improve yields,” CyberOptics’ Kulkarni said.

What about SiC?
Equipment vendors face some challenges in meeting demand for the traditional 200mm customer base. Now, some SiC device makers are making the transition to 200mm fabs, thereby presenting some new challenges for tool vendors.

SiC, a compound semiconductor material based on silicon and carbon, is used to make specialized power semiconductors for high-voltage applications, such as electric vehicles, power supplies and solar inverters.

The shift toward 200mm fabs is aimed at reducing the cost of SiC-based power semiconductor devices. Moving from 150mm to 200mm wafers yields about 2.2X more die per wafer, and the larger wafer size reduces production costs per die.
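A first-order sense of where that gain comes from can be sketched with the standard gross-die-per-wafer approximation. The die size below is a hypothetical choice for illustration; this purely geometric estimate lands nearer 1.8X, with quoted figures such as 2.2X also reflecting edge-exclusion and yield effects that the formula ignores.

```python
import math

def gross_die_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: wafer area divided by die area,
    minus a term for partial die lost around the wafer edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# Hypothetical 20 mm^2 power die (illustrative only)
die_150 = gross_die_per_wafer(150, 20.0)  # 809 die
die_200 = gross_die_per_wafer(200, 20.0)  # 1,471 die
ratio = die_200 / die_150                 # ~1.82X

print(die_150, die_200, round(ratio, 2))
```

Note that the ratio exceeds the pure area ratio of (200/150)² ≈ 1.78, because edge loss is proportionally smaller on larger wafers.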

Cree plans to invest up to $1 billion to increase its SiC fab and wafer capacities. As part of the plan, Cree is developing the world’s first 200mm SiC fab.

“Cree will turn its existing North Fab facility into a fully automotive-certified 200mm line,” said Cengiz Balkas, senior vice president and general manager of Wolfspeed, a Cree Company. “The investment leverages an existing building and state-of-the-art, mostly refurbished 200mm equipment to build out the production facility.”

Rohm also is working on 200mm SiC fabs. At the earliest, 200mm SiC fabs won’t move into production until 2022, according to IHS, so 150mm will remain the mainstream SiC wafer size for some time.

Nonetheless, as with any wafer size change, both device and equipment makers face some challenges. It may require new types of equipment. “Cree uses highly specialized tools that are custom-designed and built internally and specific to our proprietary processes in our wafer materials business,” Balkas said. “Our device fabrication lines will use industry-standard tools.”

Based on past events, the transition from 150mm to 200mm won’t be easy. Most SiC device vendors struggled with defect issues during the recent transition from 100mm to 150mm fabs.

“Although a few suppliers have demonstrated 200mm SiC wafers with low defect density, there is a shortage of 150mm SiC wafers with low defect density. So the current investments are targeting 150mm SiC wafers,” said Ajit Paranjpe, CTO at Veeco. “Thus, it is unlikely that we will see 200mm SiC in high-volume production in the near future. Once the 150mm SiC wafer shortage has been addressed, a gradual transition to 200mm SiC wafers could occur.”

SiC is a challenging technology. In the production flow, high-purity SiC materials are lowered into a crucible and heated. This, in turn, creates an ingot, which is then pulled and sliced into SiC substrates. Then, a thin epitaxial layer is grown on the substrate using a deposition process. The resulting substrate is then processed in the fab to make a power device.

It’s challenging to make 150mm SiC substrates with low defectivity. The challenges escalate at 200mm. “The bottleneck for 200mm SiC primarily is the wafer growth and SiC epi growth,” Paranjpe said. “The other steps can be performed using existing 200mm processing capability, although fabs may have to invest to upgrade from 150mm to 200mm.”

Today, the SiC industry is working on the 200mm challenges. “200mm SiC production inevitably will become important in driving economies of scale as product volumes ramp,” Lam’s Haynes said. “Today, there are still challenges around wafer availability, cost and defect density, but suppliers and end users are working hard to overcome these barriers.”

Some 200mm tools for traditional CMOS applications can be repurposed to meet the requirements for SiC. “But this often requires process optimization and changes to the wafer transport systems because SiC is transparent to common IR sensing beams,” Haynes said. “In other cases, the unique properties of SiC mean that new process capabilities need to be developed at 200mm.”

Others see similar issues. “The equipment we have installed for 100mm and 150mm can be upgraded to 200mm,” SPTS/Orbotech’s Crofton said. “For process equipment vendors who mainly serve the CMOS front-end, 200mm equipment cannot be simply transferred to wide bandgap semiconductors; there are important differences. GaN, for instance, is very sensitive to plasma damage and new processes and hardware need to be developed.”

Inspection tools are also key. Finding defects is difficult on 150mm SiC devices. 200mm will present some new challenges. “We’re starting to see requirements for 200mm-capable SiC systems,” KLA’s Odisho said. “Since SiC requirements are often different from those of standard semi tools, upgrades or new 200mm tools will be required.”

Clearly, 200mm is a vibrant market. It’s not as glamorous as leading-edge processes, but 200mm will remain a good business. The big question is whether vendors can keep up during the boom cycles. That makes it critical to continue investing during downturns.

Related Stories

200mm Fab Crunch

SiC Demand Growing Faster Than Supply

RF SOI Wars Begin

The post 200mm Cools Off, But Not For Long appeared first on Semiconductor Engineering.


The shift to 5G wireless networks is driving a need for new IC packages and modules in smartphones and other systems, but this move is turning out to be harder than it looks.

For one thing, the IC packages and RF modules for 5G phones are more complex and expensive than today’s devices, and that gap will grow significantly in the second phase of 5G. In addition, 5G devices will require an assortment of new technologies, such as phased-array antennas and antenna-in-package. Testing these antenna arrays remains an issue with 5G.

Today’s wireless networks are based on the 4G standard, which operates from the 450MHz to 3.7GHz frequency bands. In today’s 4G smartphones, the RF components are housed in an RF front-end module, which handles the amplification of the signal and filters out the noise. The antenna, which is used to transmit and receive radio signals, is separate and not bundled in the module.

The big change occurs in fifth-generation wireless networks, or 5G, which is a new wireless technology with faster data rates than 4G. Initially, some carriers are deploying 5G networks at sub-6GHz frequencies. In these 5G smartphones, the RF front-end module architectures resemble today’s 4G phones.

Some telecoms in the U.S. already are deploying a faster version of 5G using millimeter-wave (mmWave) frequencies at 28GHz. The first deployments are mainly limited to fixed-wireless home services. But if or when the service is ready on a wider scale, 5G mmWave phones will consist of new RF front-end module architectures with integrated antennas.

Besides modules, the industry also is developing new IC packages for 5G mmWave. These packages combine an RF chip and the antenna in the same unit, which is called antenna-in-package. The idea behind these new integrated antenna schemes is to bring the RF chips closer to the antenna to boost the signal and minimize the losses in systems.

Chips running at mmWave frequencies and integrated antennas aren’t new, but bringing these and other technologies over to 5G presents some challenges. Unlike 4G, 5G mmWave systems incorporate phased-array antennas, which consist of an array of antennas with individual radiation elements. A phased-array antenna can electrically steer a beam in multiple directions using beamforming techniques.
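The electronic beam steering described above comes down to applying a progressive phase shift across the array elements. A minimal sketch of that calculation, with illustrative element count, spacing and frequency (none of these values are from the text):

```python
import math

C = 299_792_458  # speed of light, m/s

def element_phases(n_elements, spacing_m, freq_hz, steer_deg):
    """Per-element phase shifts (degrees, mod 360) for a uniform linear
    array steered steer_deg off broadside: phi_n = -2*pi*n*d*sin(theta)/lambda.
    """
    lam = C / freq_hz
    theta = math.radians(steer_deg)
    return [(-360.0 * n * spacing_m * math.sin(theta) / lam) % 360.0
            for n in range(n_elements)]

# Four elements at 28GHz with half-wavelength spacing, beam steered 30
# degrees off broadside: each successive element is delayed by 90 degrees.
lam = C / 28e9
phases = element_phases(4, lam / 2, 28e9, 30)
```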

There are other issues for 5G mmWave. “The rollout is going to take a while,” said Jan Vardaman, president of TechSearch International. “In 5G, the problem is that it operates at higher frequencies. All of your packages have to be geared to deal with higher frequencies. When people talk about millimeter-wave frequencies, it’s not only the packaging, but it’s the test that goes with it. How do you test at these frequencies? In addition, you will need special things. You will see antenna in package. You will see a boatload of filters.”

Today, several companies are developing IC packages and RF modules for 5G mmWave. Among the developments:

  • Qualcomm recently introduced a mmWave antenna module for 5G phones. Huawei/HiSilicon, MediaTek, Samsung and others also are working on the technology, according to Strategy Analytics.
  • ASE is developing a fan-out technology with antenna in package for 5G mmWave. Amkor, JCET and TSMC are also working on 5G mmWave packages.
  • Companies are also developing mmWave modules for base stations.

Figure 1. BGA with antenna in package for 5G mmWave. Source: ASE

Figure 2. ASE’s fan-out with antenna in package for 5G. Source: ASE

What is 5G?
In 1991, carriers launched a new cellular technology called 2G, followed a decade later by a more advanced version known as 3G. 2G had four frequency bands, while 3G had five. Cellular networks consist of a range of frequencies in the RF spectrum.

Today, wireless networks revolve around the 4G LTE standard, which provides faster data rates. 4G is also more complicated, as it consists of more than 40 frequency bands, plus the 2G and 3G bands.

After 5G is deployed, 4G won’t go away. In fact, it will remain the mainstream wireless technology for some time. By 2024, 4G LTE is expected to have more than 6 billion subscribers, representing over two thirds of all wireless users, according to Strategy Analytics.

5G is emerging and initially will co-exist with 4G, but eventually it will evolve into a standalone network. Compared with 4G, 5G promises 10X lower latency, 10X higher throughput and a 3X improvement in spectrum efficiency. Besides faster mobile broadband, 5G enables faster communications on the manufacturing floor, in businesses and in vehicles.

As of June, 5G has been deployed by 15 mobile network operators worldwide, according to 5G Americas. There will be an additional 47 launches by year’s end, according to the organization.

“The global 5G smartphone rollout so far is slightly faster than expected,” said Neil Mawston, an analyst with Strategy Analytics. “Heavy carrier subsidies in South Korea have seen 5G smartphone sales jump quickly to over 1 million units this quarter. The U.K. and U.S. are surprisingly buoyant, while China has brought forward some of its network launches. However, it is not all rainbows and unicorns. Japan has been surprisingly slow to launch 5G networks and smartphones and is lagging badly.”

On the mmWave front, the United States has approved 28GHz for 5G, while mmWave is moving forward in Europe, South Korea and elsewhere. “We expect 5G mmWave smartphones to emerge in reasonable global volumes from 2020, led by the introduction of Apple’s iPhone 5G,” Mawston said.

The sub-6GHz version of 5G resembles 4G. 5G mmWave is different and more complex. The 5G infrastructure starts with a core network, which handles mobile voice and data connections.

It also involves a series of base stations and cell towers, which incorporate multiple antennas, sometimes called multiple input, multiple output (MIMO) antennas. In simple terms, the base stations send signals to smaller cell units or smartphones using a technology called beamforming.

There are several challenges here. “Many think mmWave is a technology searching for a problem,” Mawston said. “Millimeter wave has a line-of-sight requirement, low penetration capabilities through walls and a fairly short range. Some say mmWave is more suited to a portable or fixed environment, and not ideal for mobile smartphones. Historically, mobile technologies that struggle to penetrate walls inside buildings like Zigbee have failed to take off in smartphones. Consumers and workers like to move from room to room or office to office and not get disrupted coverage.”

The network is complex in other ways. “Millimeter waves don’t travel very far. We need a much finer mesh so that mobile devices can access the data. The mesh will be the last mile connection,” said Ajit Paranjpe, CTO of Veeco. “The backhaul will bring data to locations near the home. From there, mmWave transmission could connect to the mesh.”

It remains to be seen if 5G mmWave will succeed or fail—or fall somewhere in between. It may work in some areas, but not in others. “The network for the U.S. is going to be a challenge,” said Kim Arnold, executive director at Brewer Science. “When they say that 5G is line-of-sight, that’s going to become a big challenge in less populated areas. We may see it in cities.”

Inside the smartphone
The first 5G smartphones resemble today’s 4G phones. 4G phones incorporate digital chips and RF components. In 4G, the main antenna is separate and runs alongside the phone.

The digital part consists of a modem. The RF components include a separate RF transceiver and an RF front-end module. The transceiver transmits and receives RF signals.

The front-end module consists of several components in the same unit, including power amplifiers, low-noise amplifiers (LNAs), filters, and RF switches. Power amps provide the power for a signal to reach a destination. LNAs amplify a small signal, while filters block out the noise. Switch chips route signals from one component to another.

In the module, the dies are sometimes put in IC packages. Typically, though, they are bare dies that reside on a board.

4G phones also contain other RF chips, such as Bluetooth and WiFi devices. 5G smartphones will use those RF devices as well. Generally, those devices are housed in packages with integrated antennas, which saves board space.

Today, LG, Samsung and others are rolling out the first 5G phones, which support sub-6GHz, not mmWave. Generally, the initial 5G phones will have a similar RF front-end module architecture as 4G systems.

When the industry rolls out 5G mmWave phones, though, the RF front-end module will change. In 4G, for example, the transceiver is a standalone device. “It’s typically a packaged IC separate from PA, filters, duplexers, switches, related front-end modules,” said Christopher Taylor, an analyst with Strategy Analytics. In 5G the transceiver, along with the antenna, will move inside the module.

But mmWave itself isn’t new. For example, some cars make use of mmWave radar chips operating at 77GHz. Radar chips are used for lane detection and other safety features in cars.

Radar chips are housed in different package types, such as BGA and fan-out. BGA is a common surface mount package. In fan-out, the dies are packaged on the wafer. “In fan-out, chips are embedded inside epoxy molding compound and then high-density redistribution layers (RDLs) and solder balls are fabricated on the wafer surface to produce a reconstituted wafer,” explained Kim Yess, technology director at Brewer Science, in a blog.

There are several packaging approaches for radar chips. “If you look at an automotive radar system, the transceivers and receivers are packaged in a fan-out wafer-level package. In some cases, it’s a flip-chip package. In other cases, it’s a bare die on a board,” TechSearch’s Vardaman said. “The antenna is on the board, but that’s in the car where there is a lot of space. They are like modules.”

But in 5G mmWave phones and other systems there are different requirements, including smaller form factors with integrated antennas in the module or package. The goal is to not only save space, but also to bring the antenna closer to the RF chips.

“Once your signal is up into the mmWave frequency range, you want to keep the signal traces to and from the antenna as short as possible to avoid losses,” Strategy Analytics’ Taylor said. “You also ideally want to have the same parasitics and distance to and from each antenna element in a patch antenna. Otherwise, performance will differ for each element. The question is how do you do this? Qualcomm and others are using multiple die and stacking everything in a package, with short distances between the transceiver and beamforming components, and antenna elements.”

Developing antenna technologies for 5G mmWave is challenging. “A lot of it goes back to what frequency or spectrum you are dealing with,” said Mark Gerber, director of engineering and technical marketing at ASE. “The lower the frequency, it generally requires a larger antenna. The higher the frequency at the mmWave, you are going to have smaller antennas. Because it’s a smaller antenna, it needs to be very precise.”

That’s not the only consideration. “That requires some specialized antenna designs. It’s not just one antenna, but generally multiple antennas. It’s not just one plane, but you need multiple planes,” Gerber said.

5G is poised to dominate the wireless world, but over-the-air (OTA) testing of 5G beamforming antennas is still not ready for volume production. “At first there were a lot of concerns about OTA and how to handle that,” said David Hall, chief marketer at National Instruments. “Lab-based OTA test systems have become fairly widespread, but the current methodologies used in the lab environment do not scale to the cost and speed expectations of the manufacturing floor. As a result, NI continues to investigate both near-field and far-field approaches to OTA testing in preparation for delivering OTA-based manufacturing test solutions in the future.”

At some point, OTA will get resolved. Then, to complicate matters, integrated antennas can be developed in a variety of ways, such as antenna on board and antenna in package, among others.

mmWave packages and modules
In one example, Qualcomm recently introduced a 5G RF front-end module, which includes a mmWave antenna unit. Geared for 5G smartphones thinner than 8mm, the module supports band n258 (24.25 to 27.5GHz) for North America, Europe and Australia, as well as bands n257 (26.5 to 29.5GHz), n260 (37 to 40GHz) and n261 (27.5 to 28.35GHz).

Qualcomm’s product combines the RF front-end module and antenna in the same unit. The module interfaces with Qualcomm’s 5G modem chipset. “Inside the antenna module, there is an antenna array and all front-end components, including PA and LNA,” said Alberto Cicalini, senior director of product management at Qualcomm. “Qualcomm has the antenna on a separate substrate.”

Figure 3. Qualcomm’s mmWave antenna module. Source: Qualcomm

Qualcomm is integrating the antenna within the RF module. The antenna itself is placed on a board, sometimes called antenna-on-board.

The technology solves a major problem. “Millimeter-wave is very hard to use. The path loss is typically a factor of 100 or more than what you would have in traditional bands for cellular,” said Jim Thompson, vice president of engineering and CTO at Qualcomm, in a recent presentation. “With mmWave, the wavelength is less than a centimeter. So it’s very small and it’s subject to blockage.”
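Both of Thompson’s figures follow from first principles: free-space wavelength is c/f, and free-space path loss at a fixed distance scales with frequency squared, so moving from a ~2GHz cellular band to 28GHz costs roughly (28/2)² ≈ 200X. The 2GHz reference band below is an assumption chosen for comparison, not a value from the text:

```python
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz):
    """Free-space wavelength in millimeters."""
    return C / freq_hz * 1000.0

def path_loss_factor(f_new_hz, f_ref_hz):
    """Free-space path loss ratio at equal distance; FSPL scales as f^2."""
    return (f_new_hz / f_ref_hz) ** 2

lam_28 = wavelength_mm(28e9)          # ~10.7mm at 28GHz
lam_39 = wavelength_mm(39e9)          # ~7.7mm at the n260 band
loss = path_loss_factor(28e9, 2e9)    # ~196X vs. an assumed 2GHz band
```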

To solve the problem, smartphone OEMs will integrate several mmWave antenna modules inside a phone. “With your hand, you might block one of the antennas,” Thompson said. “So what most of our customers are doing is using three different antennas. They are typically putting them on upper right, upper left and the top of the phone.”

In operation, a 5G base station with a MIMO antenna unit would direct beams in multiple hot spots in a space. The phone would receive and transmit signals via the MIMO unit.

Still to be seen, however, is whether it will work in the field. That’s not the only challenge. “In addition, 5G mmWave systems create significant challenges for packaging engineers, since the power consumption caused by the high data rates at mmWave is coming from the active device. The thermal issues of the interface on the PCB are a very serious challenge to 5G millimeter-wave systems,” said ASE’s Sheng-Chi Hsieh in a recent paper.

In that paper, ASE described a different approach to 5G. It has developed a fan-out packaging technology using an antenna-in-package approach for 28GHz 5G.

Antenna-in-package is different than antenna-on-board, where the antenna is placed on the PCB. With antenna-in-package, the idea is to integrate an RF chip and the antenna in the IC package. The goal is to shorten the connection between the die and antenna to boost the electrical performance, according to ASE.

In the paper, ASE compared fan-out versus a flip-chip BGA package using various antenna in package schemes for 5G mmWave. In the BGA example, an RF chip (mmWave transceiver) is mounted on the bottom of a substrate. Then, the antenna array is formed on top of the substrate with a through-hole design. In some cases, the industry refers to this as a patch antenna.

There are some challenges with flip-chip BGA using an organic substrate. “The thick substrate is not easy to mount in the thin mobile phone case,” Hsieh said.

For its part, ASE has developed a hybrid fan-out package called Fan-Out Chip On Substrate for 5G mmWave. “Fan-out offers a small form factor, excellent electrical and thermal performance for mmWave antenna-in-package into mobile devices,” Hsieh said.

It appears ASE took a different approach for the antenna-in-package design. The technology involves two separate pieces—a substrate and an antenna module.

The bottom piece is a substrate. An RF chip resides on top of the substrate. The top piece is the antenna module, which is mounted on top of the substrate and connected using copper pillars.

Instead of a conventional antenna patch design, ASE devised a stacking patch. Traditional patch antennas have a narrow band. A stacked patch boosts the bandwidth in the system.

All told, ASE’s 5G fan-out package is less than 0.75mm thick with three RDLs. It demonstrated better than 10dB return loss across the 26 to 33GHz range, for roughly 7GHz of bandwidth, and provides high gain, above ~10.3dBi, with a 2×2 patch antenna array.
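To put those numbers in more familiar terms: a 10dB return loss means at most 10% of incident power is reflected at the antenna port, and 10.3dBi corresponds to roughly 10.7X the power density of an ideal isotropic radiator in the beam direction. A quick conversion sketch:

```python
def reflected_fraction(return_loss_db):
    """Fraction of incident power reflected for a given return loss (dB)."""
    return 10 ** (-return_loss_db / 10)

def gain_linear(gain_dbi):
    """Convert antenna gain in dBi to a linear power ratio vs. isotropic."""
    return 10 ** (gain_dbi / 10)

r = reflected_fraction(10)   # 0.1 -> 90% of power delivered to the antenna
g = gain_linear(10.3)        # ~10.7x an isotropic radiator
```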

From a manufacturing point of view, meanwhile, antenna-in-package is a straightforward process. “Conventional bumping processes are typically used for mounting the RFIC chip to the AiP (antenna-in-package) module,” said Warren Flack, vice president of worldwide applications at Veeco. “For some implementations, this may require an additional RDL layer in the RFIC chip for connecting the RFIC chip to the circuit board based AiP module. The RDL line sizes are not challenging for current advanced packaging lithography.”

There are some challenges, however. “5G is going to rely on higher levels of integration for advanced packaging – whether the 5G network utilizes sub6-GHz frequencies or mmWave,” said Stephen Hiebert, senior director of marketing at KLA. “More complex integration drives tighter quality requirements for the various components that are integrated in the SiP. Accurate inspections at the wafer, die and sub-package level are critical for determining the known good components for the SiP schemes being deployed for sub6-GHz 5G. For mmWave 5G, fan-out packaging is being explored as an option for antenna, so additional inline process control using inspection and metrology will be essential to achieving yield requirements.”

Meanwhile, antenna-in-package is moving in other directions. It also may find its way into short-range 77GHz car radar devices.

In a recent paper, Siliconware, now part of the ASE Group, described a flip-chip chip-scale package with an embedded trace substrate (FC-ETS) technology for 77GHz radar devices. SPIL also uses antenna in package.

“Compared to conventional antenna-on-PCB board designs, land-side die structures can achieve a shorter path from chip output to antenna input, and reduce the transmission loss of the high-frequency signal,” said Tom Tang from Siliconware in the paper.

Clearly, 5G is finally happening after years of R&D. Even mmWave is ready, at least in limited form. Still to be seen, however, is whether the technology can live up to the hype.

Related Stories

Where 5G Works, And Where It Doesn’t

5G Driving New Automotive Applications

5G OTA Test Not Ready For Production

Edge Complexity To Grow For 5G

5G Heats Up Base Stations

The post Challenges Grow For 5G Packages And Modules appeared first on Semiconductor Engineering.


The Internet of Things (IoT), Big Data and Artificial Intelligence (AI) are driving the need for higher speeds and more power-efficient computing. The industry is responding by bringing new memory technologies to the marketplace. Three new types of memory in particular—MRAM (magnetic random access memory), PCRAM (phase change RAM) and ReRAM (resistive RAM)—are emerging as leading candidates for use in IoT and cloud environments.

All three of these emerging memories are based on delicate new materials that require breakthroughs in process technology and high-volume manufacturing. The critical films are so thin and variation-sensitive that metrology is crucial. The sensitivity of the deposition layers to impurities means that ideally, multiple process steps and metrology should be integrated under vacuum.

MRAM, PCRAM and ReRAM promise to enable higher system performance and lower power than many of today’s designs based on mainstream memories. Already, major semiconductor manufacturers have announced plans to commercialize MRAM and PCRAM. This means progress is being made in engineering complex new materials and depositing them with atomic precision at an industrial scale.

MRAM is formed by precisely depositing at least 30 different metal and insulating layers, each typically 1 to 30 angstroms thick, using physical vapor deposition (PVD). Each layer must be precisely measured and controlled. The magnesium oxide (MgO) film is the core of the magnetic tunnel junction (MTJ), the critical layer that forms the barrier between the free layer and the reference layer. It must be deposited with 0.1 angstrom precision to repeatably achieve low area resistance (RA typically 5-10 Ω·µm²) and high tunnel magnetoresistance (TMR >150%). TMR is a critical parameter that dictates device performance, yield and endurance. Missing even a few atoms can significantly affect TMR (Figure 1), which explains why metrology is so critical in MRAM manufacturing.

Figure 1. Variability of a few atoms in the critical MgO layer impacts performance.
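The quoted RA and TMR figures map directly onto device resistances: for a junction of area A, the parallel-state resistance is R_P = RA/A, and TMR = (R_AP − R_P)/R_P. The sketch below uses the RA and TMR values from the text; the 50nm junction diameter is illustrative, not from the text:

```python
import math

def mtj_resistances(ra_ohm_um2, diameter_nm, tmr_percent):
    """Parallel- and antiparallel-state MTJ resistance from the RA product
    and TMR: R_P = RA / A, and R_AP = R_P * (1 + TMR)."""
    area_um2 = math.pi * (diameter_nm / 2000.0) ** 2  # nm diameter -> um^2
    r_p = ra_ohm_um2 / area_um2
    r_ap = r_p * (1.0 + tmr_percent / 100.0)
    return r_p, r_ap

# RA = 10 ohm*um^2 and TMR = 150% from the text; 50nm diameter assumed
r_p, r_ap = mtj_resistances(10, 50, 150)   # ~5.1 kilohm, ~12.7 kilohm
```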

While PCRAM and ReRAM layers are not as thin as MRAM, the materials are highly susceptible to impurities and degradation upon exposure to atmosphere. As with MRAM, this calls for an integrated PVD process system capable of depositing and measuring multiple materials under vacuum to prevent particles and impurities from contaminating the device.

In fabricating these next-generation memories, variability control is critical to achieving repeatable performance for volume manufacturing and commercialization. To hold the within-wafer threshold voltage (Vt) spread below 0.3V, critical layers in the PCRAM stack must be controlled to within ±5 angstroms of the target thickness, which in turn requires metrology capable of sub-angstrom precision. Traditional characterization approaches for such film stacks rely on a host of standalone metrology techniques and transmission electron microscopy (TEM) that are separated from the process tools, creating the potential for film degradation.

Figure 2. Traditional metrology approaches have a longer turnaround time and are limited in measuring individual layer thicknesses in a stack.

Most thin films change properties when exposed to atmosphere, so traditional atmospheric metrology relies on thicker blanket films for chamber monitoring, which is not always representative of the material properties of ultra-thin films. Such an approach consumes more deposition material and tool time for chamber qualification.

Although TEM can resolve individual thin layers, as can be seen in Figure 2, defining what constitutes an “edge” and precisely determining layer thickness become problems when the layers are no more than a couple of atoms thick. This calls for a metrology system that inherently accounts for the statistical nature of the process. Additionally, long time to results (hours to days), imprecise measurement of buried film properties and the inability to monitor a full stack on patterned wafers are driving the need for new metrology techniques.

Integrated platforms that operate under vacuum across multiple process steps are needed to minimize queue-time effects and avoid film degradation and interface issues. They also open the door to closed-loop control of each layer in the stack, reducing variability.

For new memories to reach high-volume manufacturing, the industry must enable new process control solutions. Those systems should measure pristine, as-deposited thin films, have a small footprint, and operate quickly and non-invasively.

The post Process Control For Next-Generation Memories appeared first on Semiconductor Engineering.
