Yes, it may ruin your PCB in-house party.

Traces connect the various components and connectors within a PCB. A trace is a continuous path of copper on the surface of a circuit board. The trace width is crucial because it directly impacts the performance of the PCB. Increasing the current flowing through a trace generates a significant amount of heat, so monitoring trace widths helps minimize the heat build-up that typically occurs on boards. The conductor width also determines the resistance of the trace, which directly affects the current flow.

Many designers opt for the default trace width, which may not be suitable for high-frequency applications. The appropriate trace width also varies with the application, since it determines the current carrying capacity of the trace. Trace width is one of the most important PCB design parameters, and choosing an adequate width is paramount to ensure quality performance. It also ensures that current is carried safely without overheating and damaging the circuit board.
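As a quick illustration of the width/resistance relationship, here is a minimal sketch of the DC resistance of a rectangular trace; the copper resistivity is a standard handbook value, and the dimensions are purely illustrative assumptions:

RHO_CU = 1.68e-8  # resistivity of annealed copper at 20 degC, ohm-meters

def trace_resistance_ohms(length_m, width_m, thickness_m):
    """DC resistance of a rectangular trace: R = rho * L / (w * t)."""
    return RHO_CU * length_m / (width_m * thickness_m)

# Example: 100 mm long, 10 mil (0.254 mm) wide, 1 oz (~35 um) copper.
print(f"{trace_resistance_ohms(0.100, 0.254e-3, 35e-6) * 1000:.0f} milliohms")

Halving the width doubles the resistance, which is why narrow traces run hotter for the same current.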

Sierra’s Online Tool: Impedance Calculator

We have developed an online tool that calculates the minimum trace width from the amount of current and the copper weight. Higher current requirements call for wider conductor traces, while a heavier copper weight allows for thinner traces.

Important Parameters

There are various factors that affect the selection of the right trace width. Some of the key factors include the thickness of the copper layer, whether the trace is on the top, bottom, or an inner layer, and the length of the track. Special design guidelines are prepared for traces on the inner layers of the circuit board, because heat cannot easily escape through these layers.

Other factors such as the dielectric height and the dielectric constant (Dk) also determine the trace width. We consider other essential parameters as well, such as the inductance and capacitance of the trace and the propagation delay, which allows us to calculate the trace width very precisely. Recognizing the need to improve signal integrity, we developed an Impedance Calculator for single-ended and differential pair signals. Maintaining proper signal integrity in the PCB reduces losses such as copper losses and noise.
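For a flavor of what such a calculator computes, here is a minimal sketch of a single-ended estimate using the classic IPC-2141 surface microstrip approximation; this is a textbook formula, not Sierra's actual tool, and the example dimensions are assumed values:

import math

def microstrip_z0(w_mils, t_mils, h_mils, er):
    """IPC-2141 surface microstrip: Z0 = 87/sqrt(Er+1.41) * ln(5.98h/(0.8w+t)).

    Valid roughly for 0.1 < w/h < 2.0 and Er < 15.
    """
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mils / (0.8 * w_mils + t_mils))

# Example: 10 mil trace, 1.4 mil (1 oz) copper, 6 mil dielectric, Dk ~ 4.2.
print(f"Z0 = {microstrip_z0(10, 1.4, 6, 4.2):.1f} ohms")  # ~49 ohms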

Key Parameters for Single-ended and Differential Pair

Having said all of this, we typically suggest that our customers opt for wider traces to prevent broken connections whenever there is space available on the PCB.

Establishing the Relationship Between the Current Carrying Capacity of Conductors and the Trace Width

The trace width calculation depends on parameters including the copper foil cross-sectional area, the maximum current carrying capacity, and the allowed temperature rise. The material selection and the current carrying capacity also vary with the conductor type: the maximum current carrying capacity of an internal conductor is defined as half that of an external conductor.

The copper foil cross-sectional area is directly proportional to the trace width, and for a given temperature rise the maximum current carrying capacity differs between external and internal conductors.
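As a concrete sketch of this relationship, the widely published IPC-2221 curve fit ties current, temperature rise, and copper cross-section together; the constants are the standard published ones, and note that the internal-layer constant is half the external one, matching the statement above. The example values are illustrative only:

def min_trace_width_mils(current_a, temp_rise_c, copper_oz, external=True):
    """Minimum trace width per the IPC-2221 fit: I = k * dT^0.44 * A^0.725."""
    k = 0.048 if external else 0.024  # internal k is half of external k
    # Solve the fit for the required cross-sectional area in square mils.
    area_sq_mils = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    thickness_mils = copper_oz * 1.378  # 1 oz/ft^2 copper is ~1.378 mils thick
    return area_sq_mils / thickness_mils

# Example: 2 A on 1 oz copper with a 10 degC allowed temperature rise.
print(f"external: {min_trace_width_mils(2, 10, 1, True):.0f} mils")   # ~31 mils
print(f"internal: {min_trace_width_mils(2, 10, 1, False):.0f} mils")  # ~80 mils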

Importance of Maximum Current Carrying Capacity

The maximum current carrying capacity of a copper trace usually differs from the theoretical value due to several factors, including the number of components, pads, and vias. Moreover, a very large transient surge during initial power-up can burn out a trace between pads.

To solve such issues, we prefer to increase the trace width. However, certain situations require a smaller trace width; in those cases, the solder mask can be opened over the trace area that might otherwise burn out, and solder paste can be applied there during the SMT (Surface Mount Technology) process. The solder deposited during reflow soldering thickens the conductor and increases its current carrying capacity.

In simple words, calculating the PCB trace current carrying capacity is the preferred way to decide the precise trace width. External factors such as dust and contamination must also be considered in real-world printed circuit fabrication and assembly, since excessive contamination can lead to partially broken traces.

The post How Does Trace Width Calculation Impact PCB Design? appeared first on Sierra Circuits.


Remember when your 5th-grade physics teacher told you about "work done"? That no matter how much you try, it's not "work done" unless there is displacement. So basically, there is no point in wasting your energy pushing a wall, unless you're equipped with superpowers. The same goes for your ideas and schematics. If you do not have the means to implement them in the physical world, you might as well push walls and hope they move. Here we are going to talk about something that will make you capable of moving that wall.

No, we are not here to talk about superpowers; those might exist in some parallel universe but not on Earth (as of now). But we do have something that distinguishes us from every other living creature: brains. And that is supposedly our superpower. A human brain can think of colonizing Mars. What we discuss here might not be the best thing man ever came up with, but it has surely made the life of manufacturers a lot easier.

In this article, we will discuss Computer Aided Manufacturing (CAM).

Computer Aided Manufacturing for PCBs

When you need something actually produced, not just designed, CAM is what you need. Almost all kinds of machines require some sort of control system to operate: manual control, automatic control, computer control, or remote control. When it comes to mass production, machines need to repeat precise, speedy, automatic actions continuously.

Since about 1970, manufacturing firms have seen a growing trend toward using computers to communicate instructions directly to the manufacturing machines. CAM, in simple language, is the automation of the manufacturing process with the help of software and computer-controlled machinery. A CAM system controls the production process through different degrees of automation. Because all of the manufacturing processes in a CAM system are computer controlled, a degree of precision can be achieved that is not possible with a human interface. The CAM system, for example, sets the toolpath and executes accurate machine operations based on the imported design. Some CAM systems bring in additional automation by keeping track of materials and automating the ordering process, as well as tasks such as tool replacement.

Based on what we’ve understood so far, you need to take care of three aspects for a CAM system to function:
  • Software that instructs a machine on how to make any product by generating tool paths.
  • Machinery that can turn raw material into a finished product.
  • Post Processing that converts tool paths into a language machines can understand.

Since the Industrial Revolution, the manufacturing process has undergone many dramatic changes, and the introduction of Computer Aided Manufacturing is one of the most dramatic. Manufacturers eventually became capable enough that no design is too tough for a capable machinist shop to handle. The technology evolved from the numerically controlled machines of the 1950s, which were directed by coded instructions contained on punched paper tape. Today that technology can control virtually any sort of manufacturing process.

CAM in use for PCB manufacturing

Before we can talk further about CAM, we should talk about CAD.

Computer Aided Manufacturing is commonly linked to Computer Aided Design (CAD) systems. CAD focuses on the design of a product or a part: how it is supposed to look, how it should function. CAM focuses on how to make it. CAD without CAM, therefore, is like pushing that wall: no work achieved. Every engineering process commences in the world of CAD. Engineers make either a 2D or 3D drawing, whether that's as complex as the electronics in a circuit board or as mundane as the design of a bathroom faucet. A CAD design is called a model and contains a set of physical properties that will be used by the CAM system.

When a CAD design is complete, it's fed into CAM. This is traditionally done by exporting a CAD file and then importing it into CAM software. Once your CAD model is imported into CAM, the software starts preparing the model for machining. Machining is the controlled process of transforming raw material into a defined shape through actions like cutting, drilling, or boring.

CAM software prepares a model for machining by working through several actions, including:
  • Checking for any geometrical errors impacting the manufacturing process.
  • Creating a toolpath for the model, which is basically the set of coordinates the machine will follow during the machining process.
  • Setting any required machine parameters, including cutting speed, voltage, and cut/pierce height.
  • Configuring nesting where the CAM system will decide the best orientation for a part to maximize machining efficiency.

Computer Aided Manufacturing and Computer Aided Design together facilitate mass customization. Without CAM and CAD, customization would be a time-consuming, manual, and costly process.

No idea about what that is? And how is it better with CAD/CAM?

It is the process of creating small batches of products that are custom designed to suit each particular client. CAD software makes customization hassle-free and allows rapid design changes. The automatic controls of the CAM system make it possible to adjust the machinery automatically for each different order.

Everything seems pretty impressive so far. Once CAM prepares the model for machining, all of that information gets fed into the machine to physically produce the part. But how is the machine instructed? Not with, "Hey machine, this is the 3D schematic of my circuit board, please present me with its real-world version." We can't just give a machine a bunch of instructions in English; we need to speak the machine's language. To do this, we convert all of our machining information to a language called G-code. This is the set of instructions that controls a machine's actions, including speed, feed rate, coolants, etc.

G-code is easy to read once you understand the format. An example looks like this:

G01 X1 Y1 F20 T01 S500

This breaks down from left to right as:

  • G01 indicates a linear move, based on coordinates X1 and Y1.
  • F20 sets the feed rate, which is the distance the machine travels in one spindle revolution.
  • T01 tells the machine to use Tool 1, and S500 sets the spindle speed.
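As a toy illustration of this word-address format, here is a small sketch that splits such a line into letter/value pairs; real controllers and CAM post-processors handle far more G-code syntax than this:

def parse_gcode_line(line):
    """Split one G-code line into word (letter) / value pairs."""
    words = {}
    for token in line.split():
        letter, value = token[0], token[1:]
        words[letter] = float(value) if "." in value else int(value)
    return words

print(parse_gcode_line("G01 X1 Y1 F20 T01 S500"))
# -> {'G': 1, 'X': 1, 'Y': 1, 'F': 20, 'T': 1, 'S': 500}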
Until now we kept saying machines, machining, fed into machines, and so on. But what are these machines? And how do they work with G-code?

A variety of Computer Numerical Control (CNC) machines are being used to produce engineered parts. The process of programming a CNC machine to perform specific actions is called CNC machining.

Back when CNC was not yet in action, manufacturing centers were operated manually by machinists. Computers and automation are "bffs," as millennials would say: wherever a computer goes, automation follows, and manufacturing was no exception. These days, the only human intervention required to run a CNC machine is loading a program, inserting raw material, and then unloading a finished product.

CAM In Progress

Before we end this topic, let's talk about CAM and man.

Back in the days of manual machining, being a machinist was something huge that took years of training to perfect. A machinist had to keep everything in his head: read blueprints, know which tools to use, define feeds and speeds for specific materials, and carefully cut parts by hand. It wasn't just about being precise. Being a machinist was, and still is, a matter of both art and science. Skills that once took years and years to master can now be conquered in a fraction of the time. New machines and CAM software have given us more control than ever to design and make better and more innovative products.

Even though CAM is building a world of automated manufacturing, the specter of robots completely replacing humans is still a delusion. Robotic arms and machines are commonly used in manufacturing, but they still require human workers. The job description of those workers changes, however.

The post Computer-Aided Manufacturing: The Superpower That Makes Things Real appeared first on Sierra Circuits.


Vias are miniature conductive pathways drilled into the PCB to establish electrical connectivity between the different layers. Basically, a via is a vertical trace in a PCB.

Before we dive deep into the via hole, let me define in simple terms what a PCB is. PCB design is the art of transmitting signals under controlled parameters. A printed circuit board is the groundwork for the interconnection of components. Its main purpose is to form electrical connections between the active and passive components without interrupting or interfering with other signals or connections. The basic idea is to form networks of connections that do not conflict or overlap with one another.

To achieve this, PCBs are made up of multiple layers. But how are these layers connected to each other to establish electrical continuity? This is where the via pops into the picture.

As mentioned before, vias are tiny conductive tunnels that connect different layers of the PCB and allow signals to flow through them. The precision with which a manufacturer can drill a via that meets the designer's requirements is what makes a PCB manufacturer the best in the industry. It's always good practice to find out your manufacturer's capabilities before designing your circuit.

Aspect Ratio

Aspect ratio (AR) is a key parameter in deciding the reliability of a PCB, so let us understand it before discussing vias further. The aspect ratio is the ratio of the PCB thickness to the diameter of the drilled hole.

Aspect ratio = (Thickness of the PCB) / (Diameter of a drilled hole)

The aspect ratio plays a prominent role in the plating process during PCB manufacturing. The plating solution must flow efficiently inside the drilled holes to achieve the required copper plating; holes that are small compared with the board thickness can end up with non-uniform or unsatisfactory plating. The larger the aspect ratio, the more challenging it is to achieve reliable copper plating inside the vias. Hence, the smaller the aspect ratio, the higher the PCB reliability. At Sierra Circuits, we offer an aspect ratio of 0.75:1 for microvias.
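Here is a minimal sketch of this check in code; the 10:1 through-hole limit is a common industry rule of thumb used here as an assumption, while the 0.75:1 microvia figure mirrors the number quoted above:

def aspect_ratio(depth_mils, drill_dia_mils):
    """Aspect ratio = hole depth (board thickness) / drill diameter."""
    return depth_mils / drill_dia_mils

checks = [
    ("through-hole via", 62, 8, 10.0),  # 62 mil board, 8 mil drill
    ("microvia",          6, 6, 0.75),  # 6 mil deep, 6 mil laser drill
]
for kind, depth, drill, limit in checks:
    ar = aspect_ratio(depth, drill)
    verdict = "OK" if ar <= limit else "exceeds plating capability"
    print(f"{kind}: {ar:.2f}:1 (limit {limit}:1) -> {verdict}")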

Microvias Aspect Ratio. Image credit: Roozbeh Bakhshi

Aspect Ratio Chart

Different Kinds of Vias

Types of Vias

Depending on their functionality, there are different types of vias that are drilled into a PCB.

  • Through hole vias – Hole penetrates from the top to the bottom layer. The connection is established from the top to the bottom layer.
  • Blind vias – Hole penetrates from an exterior layer and ends at an interior layer. The hole doesn’t penetrate the entire board but connects the PCB’s exterior layers to at least one interior layer. Either the connection is from the top layer to a layer in the center or from the bottom layer to some layer in the middle. The other end of the hole cannot be seen once the lamination is done. Hence, they are called blind vias.
  • Buried vias (hidden vias) – These vias are located in the inner layers and have no paths to the outer layers. They connect the inner layers and stay hidden from sight.

As per IPC standards, buried vias and blind vias must be 6 mils (150 micrometers) in diameter or less.

Microvias

Microvia Description. Image credit: Roozbeh Bakhshi

The most commonly known vias are microvias (µvias). During PCB manufacturing, microvias are drilled by lasers and have a smaller diameter than standard vias. Microvias are implemented in High-Density Interconnection (HDI) PCBs. A microvia usually spans no more than two layers, since plating copper inside such small vias is a tedious task. As discussed earlier, the smaller the diameter of a via, the higher the throwing power of the plating bath must be to achieve electroless copper plating.

Types of Microvias

Microvias can be classified into stacked vias and staggered vias based on their location in the PCB layers. Additionally, there is a third type called the skip via. A skip via skips one layer, meaning it passes through a layer without making electrical contact with it. The skipped layer forms no electrical connection with the via; hence the name.

Microvias improve the electrical characteristics and also allow miniaturization for higher functionality in less space. This, in turn, makes room for the large pin-count chips found in smartphones and other mobile devices. Microvias reduce the layer count in printed circuit board designs, enable higher routing density, and eliminate the need for through-hole vias. Their micro size and capabilities have steadily increased achievable processing power. Implementing microvias instead of through holes can reduce the layer count of a PCB and also ease BGA breakout. Without microvias, you would still be using a big fat cordless phone instead of your sleek little smartphone.

Tented Via

Sometimes a via is covered with solder mask so that it isn't exposed. This is called a tented or covered via.

Now that we have a better understanding of vias, let's come to the most important part: the via-in-pad, sometimes also referred to as via-in-pad plated over.

Via-in-pad or Via-in-pad Plated Over (VIPPO)

The increasing signal speed, board component density, and PCB thickness have led to the implementation of via-in-pad. The CAD design engineers implement VIPPO along with the conventional via structures in order to achieve routability and signal integrity requirements.

Via In Pad Vs Traditional Via

So, what is a via-in-pad? Let me explain. With traditional vias, the signal trace is routed away from the pad and then to the via, as you can see in the diagram above. This is done to avoid seepage of the solder paste into the via during the reflow process. With via-in-pad, the drilled via sits right below the pad; to be precise, the via is placed within the pad of a surface mount component.

Traditional Dog Bone and VIPPO Vias. Image credit: Cisco Systems, Inc.

First, the via is filled with non-conductive epoxy, depending on the designer's requirement. The via is then capped and plated to provide conductivity. This technique shrinks the signal path lengths and, as a result, reduces the parasitic inductance and capacitance.

The via-in-pad accommodates smaller component pitch sizes and shrinks the PCB’s overall size. This technology is ideal for BGA footprint components.

To make things better, back-drilling is implemented along with the via-in-pad. Back-drilling removes the unused portion of the via (the stub), eliminating the signal reflections it would otherwise cause and thereby ensuring signal integrity.

VIPPO with back drill

In conclusion, vias are basically wells but not big enough to drop a coin and make a wish. The via technology implemented by your PCB manufacturer could make or break your product. The next time you run into a wishing well do remember to wish for a perfect via.

Quick PCB Design Tips

Here are a few quick tips that you can consider while employing vias in your design:

  • Avoid blind and buried vias – These require more drilling time and extra laminations. This can increase the cost of the overall PCB.
  • Stacked and staggered vias – Choose staggered instead of stacked vias since the stacked vias need to be filled and planarized. This process is time consuming and expensive as well.
  • Keep the aspect ratio to a minimum. This provides better electrical performance and signal integrity: lower noise, crosstalk, and EMI/RFI.
  • Implement smaller vias. This helps you build an efficient HDI PCB, since the stray capacitance and inductance are reduced.

The post Via: The Tiny Conductive Tunnel That Interconnects the PCB Layers appeared first on Sierra Circuits.


Once upon a time, there was a company that dominated the world of electronics with its existing technology. But the thirst for more made it dive into the market of emergent technology. For the past few years, LG has been working on its ThinQ brand, integrating AI features into all kinds of products. And now LG has designed custom hardware, an IP chip, to enable on-device AI processing in "future robot vacuum cleaners, washing machines, refrigerators and air conditioners."

LG said the new IP chip includes its own neural engine that will improve the deep-learning algorithms used in its future smart home devices.

Furthermore, the chip can operate without an internet connection thanks to on-device processing. It also uses "a separate hardware-implemented security zone" to store personal data, resisting hackers who could remotely control a unit or use your appliances' sensors to invade your privacy.

We already have big names like Google and Apple raising their game with custom AI hardware. Tesla has also rolled the dice with its own processor to handle self-driving features.

Similarly, LG's IP chip will possess features like "image intelligence" to recognize space, location, objects, and users; "voice intelligence" to recognize the user's voice or noise; and "product intelligence" to strengthen a product's original functions by detecting physical and chemical changes. Its deep learning ability processes video and audio data to provide tailored AI services for customers. To be very specific, the IP chip can perform on-device functions with which it can learn and infer on its own even when there is no network connection. These neural developments will make life much easier by handling complex tasks without a cloud connection.

This will probably put LG in the front row of the AI competition. Now we just wait and watch a new wave of intelligence come and conquer. It's the right time to prepare ourselves for a crazy ride.

The post LG in house IP-The Intelligent Processor To Make “Life’s Good” appeared first on Sierra Circuits.


This presentation will discuss how to properly select PCB laminates or materials. Before selecting begins, there are many factors to consider. Make sure material characteristics fit your specific board requirements and end application. Today, we will focus on the dielectric properties, cost, and manufacturability of materials suitable for high-speed PCB designs.

One of the main problems we face when manufacturing PCBs is the PCB designers’ frequent over-reliance on the material data sheets. Don’t get me wrong, data sheets provide designers with a thorough description of a material’s electrical properties, and electrical properties are the primary consideration for high-speed applications. However, the data sheets fall short when taking into account various real-world manufacturing concerns, and real-world manufacturing concerns matter because they impact yield and cost.

PCB Material Categories

So, let’s dive right in. When selecting high-speed PCB laminates, what are the primary concerns that must be concerned in regards to one, manufacturability, and two, cost? Let’s take a look at this chart. For convenience, we’ve classified important materials into buckets based on the material’s signal loss properties.

On the left, we have materials like FR-4. These are your standard, everyday materials, from suppliers in China like Nan Ya and in the US like Isola. FR-4 is a standard material that can be used in any application, but it is also the lossiest laminate. It can also have a plethora of other electrical and mechanical issues, and if you're having issues with FR-4, shoot me an email. I would love to help.

As we move across the chart, you can see the less lossy, higher speed application PCB materials. Rogers 4350 performs similarly to Megtron 6 and Itera, so these are the materials you should consider when you need that level of performance.

Signal Loss and Operating Frequency

So, the questions arise: what PCB material properties account for the difference in electrical performance, and how do these differences affect the material cost? As it turns out, there are three main factors to evaluate when it comes to material performance for high-speed PCB designs. What is the signal loss at the operating frequency? Should you be concerned about the weave effect? And how easy is the material to manufacture in your stack-up construction?

First, let’s take a look at the relationship between signal loss and operating frequency. As you can see from the graph, there’s a direct correlation between signal loss and frequency. At the same time, we can also see that certain materials are less lossy than others. This was the basis we used to create our material classification bucket on the previous chart. This graph shows which materials could possibly perform better electrically at higher speeds.

Next, let’s compare the direct cost based on our material classifications. As you can see from the chart, less lossy materials cost more. You’ll have to decide what materials work best for your specific project. As you can see, the Rogers 4350B material is higher than that of Megtron 6 or Itera, even though electrical performance is similar. In the microwave category, the Taconic RF-35 is about 30% less expensive for the same performance as other materials in this category.

Non-PTFE Materials

Let’s do a deeper dive into the non-PTFE materials. We’ll come back to the PTFE materials in a bit. Now, all of these materials perform somewhat similarly and at somewhat similar costs, but what justifies the cost differences, and what is the advantage of working with higher cost materials?

First, we must understand material construction and the effect of the glass on characteristic impedance. One way to do this is by understanding the weave effect and the different types of glass cloth. Different glass constructions affect the Dk distribution. A board with a loose weave will have greater variation in board thickness and greater variation in Dk distribution, whereas a tight weave gives a more consistent board thickness and a more even Dk distribution, so the effective Dk remains the same as the signal traverses the dielectric.

What’s really important to note from a manufacturing perspective is that a board with a tighter weave is easier to manufacture. When the glass weave is more consistent, mechanical laser drilling also becomes more consistent.

Aside from the glass weave, there are two types of glass to choose from: Si glass and E-glass. E-glass is the predominant glass type; its thickness varies between 1.3 mils and 6.8 mils. Looking at the chart, you can see the Dk of E-glass at 5 gigahertz is 6.5, while the Df is .006. Si glass is much purer than E-glass, and as a result its Dk at 5 gigahertz is 4.5 and its Df is .004. The cost of the laminate compared to E-glass is about 15% higher, well worth it, if you ask me.

Indirect Costs

Now, let’s look at some indirect costs. It’s very important to remember that we can’t just look at the cost of the material by itself. In fact, most of the overall cost doesn’t come from direct material costs at all, but from the PCB process cost associated with that PCB material.

One of the key factors that impact cost in manufacturing is lamination. Materials in the medium, low, and extremely low loss categories are generally manufactured the same way as FR-4, although some materials are more dimensionally stable than others, and some materials are easier to laser-drill.

On the other hand, materials in the RF microwave category do not register like the non-PTFE materials, and thus become very difficult to manufacture, especially in multilayer stack-ups. This difficulty arises primarily because PTFE materials have a tendency to stretch. We usually use scrubbing to prep materials before lamination, but for this category of materials, scrubbing is a problem. We have figured out how to achieve reliable adhesion after lamination, but it's still difficult.

Key Manufacturing Considerations

So, next, let’s discuss key manufacturing considerations when dealing with hybrid PCB stack-ups. First, make sure all the materials in your hybrid stack-up are compatible with your lamination cycle. Some materials need higher temperatures and pressures than others in the lamination process. Before you submit your design, check your material data sheets to confirm compatible materials are being used.

The second consideration in hybrid stack-ups is drill parameters for proper hole formation. Feeds and speeds of the drill bits vary based on materials in the stack-up. If you have a stack-up which is a pure construction, meaning it’s all the same material, versus a hybrid construction, the feeds and speeds have to be adjusted. For example, certain settings generate a lot of heat and if the material cannot withstand the heat, there can be some deformation. You should also take into account that different materials drill differently. Rogers, for example, wears the drill bits down faster and thus impacts the cost.

After drilling and before cuposit, there is hole wall preparation. Different materials require different plasma treatments. A dirty little secret among manufacturers is that not all of us refine our process per the material. There may be process guidelines per general category, but for absolute reliability and on-time delivery, manufacturers should be refining their process per material.

Stack-up Guidelines for Mixed Materials

Next, we’ll review three stack-ups and go over some basic stack-up guidelines for mixed materials. Stackup number one is a pure Rogers stack-up using Rogers 3000 materials. It is a multilayer construction that requires longer dwell times at higher temperatures. This lamination process is known as fusion bonding. Only a select few manufacturers, like Sierra Circuits, have the equipment and the expertise to perform this operation.

Stack-up number two is a hybrid stack-up using Rogers and Isola materials. Designers use this method to save on material cost and to aid the manufacturability of multilayer stack-ups. Rogers is not suitable for the sequential lamination process, while other material vendors like Taconic and Isola make materials that perform similarly to Rogers without these limitations. In the past, it has been difficult to control the press-out thickness of these B-stage materials. Now, with better equipment and better process controls, customers can expect consistency and reap the benefits.

Third and last is a stack-up consisting solely of Taconic materials. These materials, although based on glass cloth, have similar performance to Rogers materials and are much easier to manufacture. The glass cloth also makes the materials dimensionally stable.

Hybrid Stack-up Guidelines

Now, let’s discuss some hybrid stack-up guidelines. We recommend the following when dealing with a hybrid construction. Use the high-performance material as the core. Laminate with FR-4 pre-preg. Balance the FR-4 portion, and don’t use a high Tg dielectric or bonding film with a lesser Tg material.

So, there you have it. We reviewed how to select high-speed materials based on performance and cost, including manufacturability, and I’ve reviewed three relatively complex stack-ups.

Take our PCB Material Quiz before you return to your design!

The post Comparing the Manufacturability of PCB Laminates appeared first on Sierra Circuits.


Planning a Europe tour? Or maybe looking for tickets to Miami? What about a trip to the moon? And if on the way you come across a planet you like, you can probably book yourself some land for the farmhouse you've always longed for. Next time, you'll have a place to stay in space. You must be thinking this writer has lost her marbles. Well, everything I am writing is true. Maybe not right now, but in a while this will happen. And when it does, it's going to change our lives, probably forever.

On Thursday, Jeff Bezos, the richest man on Earth and the founder of Amazon and Blue Origin, presented a new moon lander called Blue Moon, along with a small rover. He referred to a concept proposed by Gerard O'Neill in 1975: a grand, multi-generational vision of one day creating enormous space colonies in close proximity to Earth as a way of expanding humanity to a trillion people. He is really enthusiastic about using the new Blue Moon lander as a stepping stone toward that vision.

Blue Moon, a prototype, has been in the making for three years. Bezos claimed that a larger version of this lander will enable America to meet the deadline put forward by the Trump administration and bring Americans back to the moon by 2024. It will be able to carry the rover, which could perform scientific missions and shoot off small satellites.

Blue Moon and its rover landing on the moon… soon… – Image credit: Blue Origin

Before this Blue Moon, we also heard about Elon Musk's larger rockets for orbital space travel, and about space tourism, where you can actually pay to experience micro-gravity. Sounds like a space series? Richard Branson's Virgin Galactic is actually working on space tourism. According to Bezos, Earth is "finite," so we will eventually need to seek room in space to ensure human existence. Blue Origin and its previous work should help provide the proper infrastructure for space.

There should always be someone capable and trustworthy to whom we can leave our legacies: our future generation. Keeping that in mind, Bezos unveiled the Club for the Future, which will focus on space-related activities for youths in grades K-12. Bezos' company is also building a bigger rocket, New Glenn, that will compete with orbital-class rockets like the SpaceX Falcon 9 to deliver commercial satellites and other large payloads to orbit.

We can expect the first launch of Blue Origin's big rocket, New Glenn, by 2021. So, people, fasten your seat belts, stay in your seats, and get ready to zoom away to space.

The post Blue Moon: Blue Origin’s Newest Lander is Ready to Take us to the Moon appeared first on Sierra Circuits.


Before you move on to something else, ta-da! An affordable Google Pixel phone is here. And the best part is, it comes with a premium class camera. There! I believe we have your attention now.

Google, the God of the internet, has just launched the new Pixel 3a and Pixel 3a XL smartphones. In an attempt to attract new customers, Google is entering the global market with lower-priced versions of the Pixel 3.

The Pixel 3a offers a 5.6-inch screen while the Pixel 3a XL comes with a 6-inch screen. The phones feature 4 GB of RAM, a 12.2-megapixel rear camera, and an 8-megapixel front camera. Both phones run stock Android out of the box, which has been the selling point of Google phones so far. The Pixel 3a retails at $399 and the Pixel 3a XL at $479.

Google’s Pixel 3a, the low price version of the Pixel 3. – Image credit: Google The Happy Part

The Pixel 3a comes at a compelling price. According to Google, quick charging offers 7 hours of battery life from a 15-minute charge. The Pixel 3a also comes with the flagship-level Pixel camera, which makes it stand out from the crowd. And it features an FHD+ display.

The Pixel 3a also comes with a 3.5 mm headphone jack. And it has one of the best point and shoot cameras out there.

Your battery will last 7 hours after 15 minutes of charging time. – Image credit: Google

The Sad Part

The smartphone has a polycarbonate body. In simple words, a plastic body. The 600-series Snapdragon processor, as opposed to the 800 series in the Pixel 3, might steal your smile sometimes. And the speaker is located at the bottom with a USB Type-C connector.

Specs
  • Display: 5.6-inch and 6.0-inch
  • Processor: Qualcomm Snapdragon 670
  • Cameras: 8-megapixel front and 12.2-megapixel rear
  • Resolution: 1080 x 2160 and 1080 x 2220 pixels
  • RAM: 4 GB
  • OS: Android 9 Pie
  • Storage: 64 GB
  • Colors: black, white, and purple-ish
  • Battery: Pixel 3a: 3,000 mAh; Pixel 3a XL: 3,700 mAh

To conclude, the phones are a lot like the Pixel 3 but with a different name and body. A budget smartphone with a premium camera is what people would like to have in their pockets now. With the launch of the OnePlus 7 Pro around the corner, will the Pixel 3a make an impact? Let's see what happens.

The post The Not So Expensive Google Pixel 3a Is Here appeared first on Sierra Circuits.


We had a busy weekend in Chicago, where we attended KiCon 2019. On Friday, April 26 and Saturday, April 27, Sierra sponsored the first KiCad users conference. Or, as KiCon put it best, "The first and the largest gathering of hardware developers using KiCad."

Sounds awesome, doesn’t it? Well, it was. Let’s dive into the recap of this event full of surprises.

What is KiCon?

Video: KiCon 2019 (YouTube)

If you listen to Amit Bahl, our Director of Sales and Marketing, ”it’s an amazing place where innovation around hardware is happening. Sierra, hopefully with KiCad, can make a big difference in hardware development.” Because this is what KiCon is all about: hardware development and innovation. Printed circuit boards are the heart of electronics. The whole tech industry relies on them. And where does it all start? With a good design, of course.

For Chris Gammell, KiCon organizer and electronics design consultant, the main goal was ”mostly getting it off the ground! But I was also hoping to bring together the developer and the manufacturing community.” Mission accomplished.

KiCon brought in experienced electronics designers to share their best practices. The classes taught attendees how to get the best out of KiCad to build advanced products. One of the main points of focus of the conference was to show users what’s “under the hood” of KiCad. The end goal was to demo how you can use the tool to supercharge your next design.

What about the attendees? Who were they? As Chris Gammell told us, there was a wide range of PCB designers and KiCad users. ”Some people were learning KiCad for the first time. Others were learning expert level skills. Some people were just there networking too!”

The Manufacturer Panel

Video: The Manufacturer Panel at KiCon 2019 (YouTube)

Designing for manufacturing can be a constraint. This is when theory meets reality. A good PCB designer knows to design with their manufacturer’s capabilities in mind. If not, it might just fail. This is why KiCon arranged a manufacturer panel for designers to share their doubts and issues and get some answers from the experts.

What’s next?

It seems like you can count on a second edition next year. According to Chris Gammell, KiCon 2020 will probably be a thing. And before that, KiCad is even thinking about going to Europe. As you can see, this is already becoming a well-oiled designer conference!

The post Sierra sponsored KiCon 2019 and we’re glad we did. Here’s our recap! appeared first on Sierra Circuits.


The leap from the conventional phototool to laser direct imaging (LDI) was fascinating but tough. Let's find out how we sailed through this journey.

Introduction: Why the change?

We have often heard that "old is gold," but this does not apply to everything, especially technology. In technology, old is usually just old and requires an upgrade. If you stay loyal to a specific technology for too long, you will eventually become obsolete. So, no matter what industry you are in, you need to update to the latest version. Sounds quite simple, right? But that's not always the scenario. Sometimes it is quite a hustle to upgrade your existing technology. Technological advancements may sound fascinating when we say it like that, but they take a lot of comprehensive effort.

Our PCB world is no exception.

In our world, too, we have to keep up with customer demands, which always tend to rise. As the fine line between need and desire gets blurred, the technology industry is coming up with products fancier than our imagination. Electronics are getting sleeker and more miniaturized by the day. Today's PCB manufacturers are driven to produce not only smaller, faster, lighter, less expensive, more complex, and more reliable PCBs, but also to constantly adapt to new product designs.

PCB manufacturers are striving to produce high-density interconnect (HDI) boards at significantly lower cost and shorter implementation time. Board complexity is growing every day. During the past five years, the average track width in conventional circuit boards went from 200 µm to 100 µm, and today the call for 50/50 µm or even 25/25 µm is real. Existing technologies are unable to offer acceptable options for these growing requirements. The conventional film exposure process generally used for the photolithographic structuring of the conductive pattern was efficient until now; however, it now limits our capabilities, since track widths and spaces below 150 µm are difficult to achieve with film.

The inevitable change

Therefore, incorporating a few necessary modern strategies into the production process became inevitable. The most important of these was exposing the circuit patterns directly on boards with a laser system. As I said earlier, it's not as easy as we make it sound. Before we could bet our money on this challenge, we developed and tested several new technologies. Laser direct imaging has managed to prove itself as the best and most comprehensive imaging solution for HDI boards.

By now, you must have understood that laser direct imaging, as its name suggests, is an imaging process. It images circuits directly onto boards, i.e., it uses no phototool or mask. LDI processing requires a board with a photosensitive surface, positioned under a computer-controlled laser. We expose the photosensitive resist by means of a laser beam that is switched on and off by a computer-controlled system as it scans across the board panel. The laser used in this process is usually in the UV spectrum, as this suits most commonly available photoresists; there are also systems that operate in the visible and infrared spectra, working with specially formulated photoresists. The most common photoresists we use these days are UV sensitive.

History of laser direct imaging: how it barged into the conventional PCB process

Since the inception of the PCB, the majority of manufacturers have produced circuit boards using films to create circuit patterns in a photoresist. This technology uses transparent films with the circuitry pattern on them. The exposure process is carried out inside UV exposure machines, which use high-power lamps as the source of UV light. This technology is called photolithography.

But since the mid-2000s, since 2004 to be precise, laser direct imaging has become the most comprehensive imaging solution for HDI boards. It was introduced to the market by Orbotech with their Paragon 9000 machine. LDI uses a focused laser beam to directly expose a circuit board panel coated with photoresist, eliminating the use of masks and their inherent problems (which we will discuss in later parts). When this type of imaging was first introduced, some 20 years ago, throughput was an issue, so it was used only in low-volume or prototype runs. With time, advancements in LDI equipment and faster-acting photoresists have made it feasible for high-volume runs.

Old but not so gold: traditional method problems

PCBs always consist of a base material to which a copper cladding is applied. For the structuring of the conductive pattern, the material surface is coated with a photosensitive laminate (photoresist) in what is called the laminating process.

Imaging, or creating a circuit pattern, is one of the most fundamental steps of board fabrication. There is a multitude of possible ways to image a circuit, but typically the sequence is:

  • Coat resist on the copper laminate
  • Expose the circuit design onto the resist
  • Develop the unexposed resist
  • Etch the exposed copper

The above sequence can be summed up as follows:

A light sensitive photo-resist is exposed and developed to create a pattern that selectively protects the copper from etching. After copper etching, the resist mask is stripped away and the remaining copper forms the desired circuitry.

Photo-exposing usually uses a phototool. The phototool is aligned to the resist-coated substrate and high-intensity UV light is flooded through it. The UV light passes through selectively, leaving the desired circuitry unhardened, so that the copper gets exposed as the circuit pattern. The problem with this method is the precise alignment of the phototool to the substrate.

Why? Let’s find out.

The UV light source is of high intensity and very often “collimated”.

Another confusing term?

Let’s start from the beginning. Collimated light tends to transmit perpendicular to the phototool.  If we consider for an ideal situation, even if there is a distance between the photool and the resist (as discussed above), the light pattern exposing the resist would match the opening in the phototool and not spread out. Therefore, it is used for fine line circuitry. But there are still limitations like:

  • Highly collimated light does not necessarily mean perfectly collimated light, so it might just spread.
  • The glass of the phototool itself can refract or bend the light minutely, but significantly enough to ruin the image.
  • Dust or dirt in the air or on the phototool can also refract the light, so the phototool itself is not always perfect.
  • Flexible substrates can be really difficult: aligning the phototool to a flex substrate can introduce minor distortion that in turn reduces registration tolerance.

That is all for collimated light. Non-collimated light is not perpendicular, as it tends to originate from a single point, and it produces poor circuit traces: the light spreads across even a tiny gap between the phototool and the resist.

For photo imaging lines and spaces of 5 mils and above, collimated light will suffice, but today's boards are far denser. Therefore, it is no longer "the best option" on offer.

Diving deep into laser direct imaging

LDI is our answer to most, if not all, of the problems surfacing right now. HDI boards are the most popular boards in the PCB industry right now, so let's first explore whether LDI is suitable for them.

These systems, which work in the UV spectrum, have proved most suitable for obtaining fine lines and spaces below 2 mils. The capabilities that make LDI suitable for HDI include:

  • Fine lines, traces, and spaces down to at least 2 mils and below are possible.
  • Proper depth of focus ensures imaging quality for high-topography designs, which is especially needed for uniform exposure of outer layers.
  • The system design holds up across various product types, materials, thicknesses, manufacturing technologies, and production steps.
  • It offers a flexible and precise registration system that is compatible with different manufacturing technologies and production flows.

With these and other capabilities, LDI can be considered an amazing breakthrough in the field of circuit board manufacturing.

Let’s now dig into LDI process, and see how can we describe it. We know how conventional PCB imaging works, trailing from there we have:
  • The substrate is coated with photosensitive resist.
  • The substrate is then positioned in the LDI unit.
  • LDI digitally prints the desired circuitry.
  • The resist is developed and the unwanted copper is etched off.
  • The resist is stripped off. The desired copper pattern remains.

As we know, unlike photo-exposing, LDI does not use a phototool but directly exposes a saved artwork pattern digitally onto the resist. The photoresist is selectively exposed as the laser beam traverses the substrate in a rastering fashion.

Raster, or here rastering, means composing an image of tiny rectangular pixels, or picture elements, arranged in a grid (a raster) of x and y coordinates (plus a z coordinate in the 3D case).

The image formation here is like that on a CRT screen, which is built from hundreds of horizontal lines across the screen. Like the phototool process, LDI also requires a photoresist, though generally a specially formulated one. The application of the photoresist is identical in both processes, and the post-exposure processing of an LDI board is an exact copy of the phototool process.
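As a toy illustration of rastering, the sketch below scans each row of a pixel grid and switches the "laser" on only over pixels that belong to the pattern; a real LDI system does this optically and at a vastly finer resolution:

# 1 = expose (laser on, resist reacts), 0 = skip (laser off).
pattern = [
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
]

for y, row in enumerate(pattern):
    runs, x = [], 0
    while x < len(row):
        if row[x]:                       # laser switches on
            start = x
            while x < len(row) and row[x]:
                x += 1                   # laser stays on across the run
            runs.append((start, x - 1))  # laser switches off
        else:
            x += 1
    print(f"scan line {y}: exposed x-ranges {runs}")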

Laser direct imaging is therefore far superior to the phototool for HDI board fabrication. Still wondering why? Take note of the following differences, which show in short how LDI overpowers the phototool:
  • A phototool demands regular expenses associated with storage, preservation, tracking, and constant inspection. LDI avoids this entire expense and hardship.
  • As the phototool is handled manually, dirt, dust, fibers, smears, and scratches can easily degrade it. LDI is fully computer controlled and hence free from such problems.
  • Even under ideal conditions, there can be light diffraction with a phototool.
  • Repeated defects occur due to the handling of phototools and off-contact exposure.
  • The dimensional stability of phototools is not up to the mark: their size changes with temperature and humidity.
  • Phototools are anisotropic in nature; for PCBs with tight tolerances, assuming the same value for both the x-axis and the y-axis can lead to serious inaccuracies.
  • With LDI, there is no light leak; the beam is highly controlled and focused.
  • With LDI, image alignment is precise: computer-enhanced optical alignment automatically compensates for material distortion.

We see that LDI removes a lot of the variables that come into play with phototools. Consequently, the imaged lines, traces, and spaces, along with the substrate alignment, are quite precise. But even with LDI, there are variables that limit a PCB's circuit pattern density, such as the required copper thickness and the limitations of the chemical etching process.

Even LDI has a few issues. The main one is processing time: while flooding a substrate with UV light takes a couple of seconds, rastering an entire circuit pattern obviously requires more time. However, for circuit boards with tight tolerances that require traces and spaces under 5 mils, LDI is the only viable option right now. LDI is therefore effectively eliminating the phototool, at least until we find something more precise.

The post Laser Direct Imaging: A Sharp and Precise Technology appeared first on Sierra Circuits.


The automotive industry is in for a treat! The young inventors of Stanford University are crafting a solar car that runs purely on clean energy. The Stanford Solar Car Project is run by Stanford students with a primary focus on sustainable technology.

As per tradition, the solar car team designs, assembles, and races a solar car across the Australian Outback in the Bridgestone World Solar Challenge every two years. In 2017, the team developed a sleek, aggressively designed solar car called Sundae. Sundae featured an asymmetrical catamaran aerobody that made it look like a car from a sci-fi movie. This time, we are sure the team will surprise us with a new futuristic model. The new car will be unveiled in mid-July this summer.

The solar car project is divided into different teams that work in sync to build this prototype.

The Brains Behind the Wheels

The Stanford Solar Car Project team working hard. – Image credit: Stanford Solar Car Project

To start with, the array team has made immense progress in designing and analyzing the different module layouts suitable for the solar car. The team is collaborating with Alta Devices and has finalized its array design.

The battery team keeps the wheels in motion. Harvesting solar energy and efficiently storing it puts the battery team in the driver's seat. The team has made a significant contribution toward the design of their test pack and is closing in on a stable, race-oriented design.

The electrical team has successfully prototyped a potentiometer-based throttle and updated the steering wheel PCB. They have also completed a controller board which holds five light controllers. The team is grateful to Sierra Circuits for wrapping up the vehicle computer.

Stanford Solar Car Master Assembly. – Image credit: Stanford Solar Car Project

The mechanical team has been working on the chassis, suspension, and mechanical subsystems. The team has been running tests on the new carbon fiber that will form the aerobody of the car, and has been constantly reviewing its designs with the help of Tesla, Joby, and Lucid Motors. Manufacturing will start this spring.

We are overwhelmed by all the hard work these young minds have put into this project. We believe the whole race is about technology moving ahead rather than cars touching the finish line. But go Stanford!

The post Stanford Solar Car: The Green Car Makers appeared first on Sierra Circuits.
