The ISC High Performance 2018 contributed program is now open, offering diverse opportunities to participate. We welcome submissions for the following sessions: birds of a feather, research posters, project posters and the PhD forum. ISC 2018 also calls on regional and international STEM undergraduate and graduate students interested in high performance computing (HPC) to volunteer as helpers at the five-day conference.

It is the HPC community’s active participation in the above-mentioned programs that ultimately creates a productive and distinctive contributed program. In return, contributors will enjoy sharing their ideas, knowledge and interests in a dynamic setting and will also have the chance to meet a wide network of people representing various organizations in the HPC space. ISC 2018 expects to attract 3,500 attendees.

The next conference will be held from June 24 to 28, 2018, in Frankfurt, Germany, and will continue its tradition as the largest HPC conference and exhibition in Europe. It is well attended by academics, industry leaders and end users from around the world. The ISC exhibition attracts around 150 organizations, including supercomputing, storage and network vendors, as well as universities, research centers, laboratories and international projects. Important submission and acceptance-notification deadlines are available on the ISC website.

Birds of a Feather

These informal BoF sessions bring together like-minded people to discuss current HPC topics, network and share ideas. Each session is allocated 60 minutes, addresses a different topic and is led by one or more individuals with expertise in the area. If you are interested in hosting a BoF, please refer to the topics and submission guidelines on the ISC website.

The BoFs Committee will review all submitted BoF proposals based on their originality, significance, quality, clarity, and diversity with respect to the overall diversity goals of the conference. Each proposal will be evaluated by at least three reviewers.

The BoF sessions will be held from Monday, June 25 through Wednesday, June 26, 2018.

PhD Forum
The PhD forum is a great platform for PhD students to present research results in a setting that sparks scientific exchange and lively discussions. It consists of two parts: a set of back-to-back lightning talks followed by a poster presentation. The lightning talks are meant to give a quick, to-the-point presentation of research objectives and early results, while the posters provide more in-depth information as a starting point for deeper discussions.

Submitted proposals will be reviewed by the ISC 2018 PhD Forum Program Committee. An award and travel funding are available to students; detailed information is available on the ISC website. The PhD Forum will be held on Monday, June 25, 2018.

Research Posters
The ISC research poster session is an excellent opportunity to present your latest research results, projects and innovations to a global audience, including your HPC peers. Poster authors will have the opportunity to give short presentations on their posters and to informally present them to attendees during lunch and coffee breaks. Research posters are intended to cover all areas of interest listed in the call for research papers. Visit the website for the full submission process. The ISC organizers will sponsor awards for outstanding research posters: the ISC Research Poster Awards.

All submitted posters will be double-blind reviewed by at least three reviewers. Poster authors will give short presentations on their posters on Tuesday, June 26, and will also have the opportunity to informally present them to the attendees during lunch and coffee breaks. The accepted research posters will be displayed from Tuesday, June 26 through Wednesday, June 27, 2018.

Project Posters

Held for the second time, the project poster session allows submitters to share their research ideas and projects at ISC. The session provides scientists with a platform for information exchange, whereas for the attendees it is an opportunity to gain an overview of new developments, HPC research and engineering activities. In contrast to the dissemination of research results in the research poster sessions, project posters enable participants to present ongoing and emerging projects by sharing fundamental ideas, methodology and preliminary work. Researchers with upcoming and recently funded projects are encouraged to submit a poster of their work. Those with novel project ideas are invited to present as well.

All accepted project posters will be displayed in a prominent spot in the exhibition hall from Monday, June 25 through Wednesday, June 27, 2018. During the coffee breaks on Tuesday and Wednesday afternoons, ISC attendees will have the opportunity to meet the authors at their posters and discuss their projects.

ISC Student Volunteer Program

If you are a student pursuing an undergraduate or graduate degree in computer science or any other STEM-related field, and high performance computing (HPC) is on your radar, volunteering at ISC 2018 can help steer your future career in the right direction.
The organizers are looking for enthusiastic and reliable young people to help them run the conference. In return, they offer you the opportunity to attend the tutorials, conference sessions and workshops for free. They also encourage you to use the event to build your professional network. The conference after-hours experience is also fun: you will have the opportunity to mingle with peers from other educational institutions and international backgrounds. Such connections can go a long way as you develop your career. Visit the website to find out more.

HPC Today by Christophe Rodrigues

If you want to break into AI, this Specialization will help you do so. Deep Learning is one of the most highly sought-after skills in tech. We will help you become good at Deep Learning.

In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.

You will also hear from many top leaders in Deep Learning, who will share with you their personal stories and give you career advice.

AI is transforming multiple industries. After finishing this specialization, you will likely find creative ways to apply it to your work.

We will help you master Deep Learning, understand how to apply it, and build a career in AI.

This specialization is created by:
deeplearning.ai is dedicated to advancing AI by sharing knowledge about the field. We hope to welcome more individuals into deep learning and AI. deeplearning.ai is Andrew Ng’s new venture, which, among other goals, strives to provide comprehensive AI education beyond borders.

For more information

About the specialization:


Course 1: Neural Networks and Deep Learning

Session start: Jan 1
Engagement: 4 weeks of study, 3-6 hours a week
Subtitles: English, Chinese (Traditional), Chinese (Simplified)

About this session:
If you want to break into cutting-edge AI, this course will help you do so. Deep learning engineers are highly sought after, and mastering deep learning will give you numerous new career opportunities. Deep learning is also a new “superpower” that will let you build AI systems that just weren’t possible a few years ago.

In this course, you will learn the foundations of deep learning. When you finish this class, you will:
– Understand the major technology trends driving Deep Learning
– Be able to build, train and apply fully connected deep neural networks
– Know how to implement efficient (vectorized) neural networks
– Understand the key parameters in a neural network’s architecture

This course also teaches you how Deep Learning actually works, rather than presenting only a cursory or surface-level description. So after completing it, you will be able to apply deep learning to your own applications. If you are looking for a job in AI, after this course you will also be able to answer basic interview questions.

This is the first course of the Deep Learning Specialization.
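To make the “vectorized” objective a bit more concrete, here is a minimal NumPy sketch of a small fully connected network; the layer sizes, activation choices and random inputs are placeholder assumptions for illustration, not material taken from the course:

    import numpy as np

    def relu(z):
        return np.maximum(0, z)

    def forward(X, params):
        # Vectorized forward pass through a two-layer fully connected network.
        # X has shape (n_features, m_examples); all examples are processed at once.
        W1, b1, W2, b2 = params["W1"], params["b1"], params["W2"], params["b2"]
        A1 = relu(W1 @ X + b1)            # hidden-layer activations
        Z2 = W2 @ A1 + b2                 # output-layer pre-activations
        return 1.0 / (1.0 + np.exp(-Z2))  # sigmoid output for binary classification

    rng = np.random.default_rng(0)
    params = {
        "W1": rng.standard_normal((4, 3)) * 0.01, "b1": np.zeros((4, 1)),
        "W2": rng.standard_normal((1, 4)) * 0.01, "b2": np.zeros((1, 1)),
    }
    X = rng.standard_normal((3, 5))       # 3 features, 5 examples
    print(forward(X, params).shape)       # (1, 5): one prediction per example

The point of the vectorized form is that the matrix products handle all examples at once instead of looping over them in Python.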

Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Session start: Jan 15
Engagement: 3 weeks of study, 3-6 hours a week
Subtitles: English, Chinese (Traditional), Chinese (Simplified)

About this session:
This course will teach you the “magic” of getting deep learning to work well. Rather than the deep learning process being a black box, you will understand what drives performance, and be able to more systematically get good results. You will also learn TensorFlow.

After 3 weeks, you will:
– Understand industry best practices for building deep learning applications.
– Be able to effectively use the common neural network “tricks”, including initialization, L2 and dropout regularization, batch normalization, and gradient checking.
– Be able to implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, Momentum, RMSprop and Adam, and check for their convergence.
– Understand new best practices for the deep learning era, such as how to set up train/dev/test sets and analyze bias/variance.
– Be able to implement a neural network in TensorFlow.

This is the second course of the Deep Learning Specialization.
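As a hedged sketch of how several of the techniques listed above (L2 and dropout regularization, batch normalization, the Adam optimizer) fit together in TensorFlow, here is a minimal tf.keras model; the architecture, input size and hyperparameters are illustrative assumptions rather than the course’s actual assignments:

    import tensorflow as tf

    # A small fully connected classifier combining L2 regularization,
    # batch normalization, dropout and the Adam optimizer.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu",
                              kernel_regularizer=tf.keras.regularizers.L2(1e-4)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()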

Course 3: Structuring Machine Learning Projects
Session start: Jan 15
Engagement: 3 weeks of study, 3-4 hours/week
Subtitles: English

About this course:
You will learn how to build a successful machine learning project. If you aspire to be a technical leader in AI, and know how to set direction for your team’s work, this course will show you how.

Much of this content has never been taught elsewhere, and is drawn from my experience building and shipping many deep learning products. This course also has two “flight simulators” that let you practice decision-making as a machine learning project leader. This provides “industry experience” that you might otherwise get only after years of ML work experience.

After 2 weeks, you will:
– Understand how to diagnose errors in a machine learning system, and
– Be able to prioritize the most promising directions for reducing error
– Understand complex ML settings, such as mismatched training/test sets, and comparing to and/or surpassing human-level performance
– Know how to apply end-to-end learning, transfer learning, and multi-task learning

I’ve seen teams waste months or years through not understanding the principles taught in this course. I hope this two week course will save you months of time.

This is a standalone course, and you can take it as long as you have basic machine learning knowledge. This is the third course in the Deep Learning Specialization.
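As a toy illustration of the kind of error analysis this course formalizes, the sketch below compares training and dev-set error against a human-level baseline to decide whether bias or variance deserves attention first; the error figures are made up for illustration:

    # Compare training and dev-set error to a human-level baseline
    # (a proxy for Bayes error) to prioritize bias vs. variance work.
    human_level_error = 0.01   # made-up numbers for illustration
    train_error       = 0.08
    dev_error         = 0.10

    avoidable_bias = train_error - human_level_error   # 0.07
    variance       = dev_error - train_error           # 0.02

    if avoidable_bias > variance:
        print("Prioritize bias reduction: bigger model, longer training, better architecture.")
    else:
        print("Prioritize variance reduction: more data, regularization, data augmentation.")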

Course 4: Convolutional Neural Networks
Session start: Jan 15
Engagement: 4 weeks of study, 4-5 hours/week
Subtitles: English

About this course:
This course will teach you how to build convolutional neural networks and apply them to image data. Thanks to deep learning, computer vision is working far better than just two years ago, and this is enabling numerous exciting applications ranging from safe autonomous driving, to accurate face recognition, to automatic reading of radiology images.

You will:
– Understand how to build a convolutional neural network, including recent variations such as residual networks.
– Know how to apply convolutional networks to visual detection and recognition tasks.
– Know how to use neural style transfer to generate art.
– Be able to apply these algorithms to a variety of image, video, and other 2D or 3D data.

This is the fourth course of the Deep Learning Specialization.
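To make the ideas above a little more concrete, here is a hedged tf.keras sketch of a small convolutional network with one residual (skip) connection; the image size, filter counts and number of classes are illustrative assumptions:

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(64, 64, 3))   # small RGB images (illustrative size)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)

    # A minimal residual block: two convolutions plus a skip connection.
    shortcut = x
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same")(x)
    x = tf.keras.layers.Add()([x, shortcut])
    x = tf.keras.layers.Activation("relu")(x)

    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10, activation="softmax")(x)   # 10 classes, for illustration

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()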

Course 5: Sequence Models
Session start: Jan 2018
Subtitles: English

About this course:
This course will teach you how to build models for natural language, audio, and other sequence data. Thanks to deep learning, sequence algorithms are working far better than just two years ago, and this is enabling numerous exciting applications in speech recognition, music synthesis, chatbots, machine translation, natural language understanding, and many others.

You will:
– Understand how to build and train Recurrent Neural Networks (RNNs), and commonly-used variants such as GRUs and LSTMs.
– Be able to apply sequence models to natural language problems, including text synthesis.
– Be able to apply sequence models to audio applications, including speech recognition and music synthesis.

This is the fifth and final course of the Deep Learning Specialization.
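As a hedged illustration of a basic sequence model, the sketch below embeds token IDs, runs them through an LSTM and classifies the whole sequence (for example, its sentiment); the vocabulary size, sequence length and layer dimensions are assumptions for illustration:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(100,), dtype="int32"),   # sequences of 100 token IDs
        tf.keras.layers.Embedding(input_dim=10000, output_dim=64),
        tf.keras.layers.LSTM(128),                     # swap in tf.keras.layers.GRU(128) for a GRU variant
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()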

Kinetica Predicts AI and IoT Use Cases Will Drive Demand for Next-Gen Databases in 2018

Kinetica’s CTO and Cofounder Nima Negahban has come out with his top technology predictions for 2018. Today’s analytical workloads require faster query performance, advanced analysis methods, and more frequent data updates. For real-time analysis of massive data sets, particularly for use cases where time and location matter, enterprises are turning to new next-generation databases to explore data faster and uncover new insights.

“Based on its enormous potential, investments in AI can be expected to increase in 2018, while investments in IoT will need to show measurable return,” said CTO and Cofounder of Kinetica Nima Negahban. “The ability to operationalize the entire pipeline with GPU-optimized analytics databases now makes it possible to bring AI and IoT to business intelligence cost-effectively. And this will enable the organization to begin realizing a satisfactory ROI on these and prior investments.”

There are four major trends that are driving the adoption of next-generation analytical databases in 2018, according to Nima Negahban, CTO and cofounder of Kinetica. These include:

Trend #1 – Organizations Demand a Return on Their IoT investments

Companies continued to invest in IoT initiatives in 2017, but 2018 will be the year where IoT monetization becomes critical. While it is a good start for enterprises to collect and store IoT data, what is more meaningful is understanding it, analyzing it, and leveraging the insights to improve efficiency. The focus on location intelligence, predictive analytics, and streaming data analysis use cases will dramatically increase to drive a return on IoT investments.

Trend #2 – Enterprises Will Move from AI Science Experiments to Truly Operationalizing It

Enterprises have spent the past few years educating themselves on various AI frameworks and tools. But as AI goes mainstream, it will move beyond small-scale experiments to being automated and operationalized. As enterprises move forward with operationalizing AI, they will look for products and tools to automate, manage, and streamline the entire machine learning and deep learning life cycle. In 2018, investments in AI life cycle management will increase and technologies that house the data and supervise the process will mature.

Trend #3 – Beginning of the End for the Traditional Data Warehouse

The traditional data warehouse is struggling to manage and analyze the volume, velocity, and variety of data. While in-memory databases have helped alleviate the problem to some extent by providing better performance, data analytics workloads continue to be more compute-bound. In 2018, enterprises will start to seriously rethink their traditional data warehousing approach and look at moving to next-generation databases that leverage memory, advanced processor architectures (GPU, SIMD), or both.

Trend #4 – Building Safer Artificial Intelligence with Audit Trails

AI is increasingly being used for applications like drug discovery or the connected car, and these applications can have a detrimental impact on human life if an incorrect decision is made. Detecting exactly what caused an incorrect decision that led to a serious problem is something enterprises will start to look at in 2018. Auditing and tracking every input and every score that a framework produces will help with detecting the human-written code that ultimately caused the problem.
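One simple way to approach such an audit trail, sketched below purely as an illustration and not as a feature of any particular product, is to log every input and every score together with a model version so a decision can later be traced back; the function and field names are hypothetical:

    import json
    import time
    import uuid

    def predict_with_audit(model, features, model_version, log_path="audit_log.jsonl"):
        # 'model' is any object with a .predict() method returning a score
        # (a hypothetical interface); 'features' is assumed to be JSON-serializable.
        score = model.predict(features)
        record = {
            "request_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "features": features,
            "score": score,
        }
        with open(log_path, "a") as f:    # append-only audit log, one JSON record per line
            f.write(json.dumps(record) + "\n")
        return score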

Tweet this: .@KineticaDB predicts #AI #IoT will drive demand for next-gen analytical #databases http://bit.ly/2lT1vB7

About Kinetica

Headquartered in San Francisco, Calif., Kinetica is the provider of the only GPU database that combines data warehousing, advanced analytics and visualization, and is optimized for running machine learning and deep learning models. With Kinetica, users can simultaneously ingest, explore, analyze and visualize fast-moving, complex data within milliseconds to make critical decisions, find efficiencies, lower costs, generate new revenue and improve customer experience. Customers in verticals such as financial services, retail, healthcare, utilities and the public sector use Kinetica for fast OLAP, the convergence of AI and BI, and geospatial analytics. Amazon, Cisco, Dell, Google, HP, IBM, Microsoft, NVIDIA and Tableau are part of the Kinetica ecosystem of cloud, hardware, server and software partners. Investors include Canvas Ventures, Citi Ventures, GreatPoint Ventures, and Meritech Capital Partners. Learn more at www.kinetica.com.

Samsung Optimizes Premium Exynos 9 Series 9810 for AI Applications and Richer Multimedia Content

The new Exynos 9810 brings premium features with a 2.9GHz custom CPU, an industry-first 6CA LTE modem and deep learning processing capabilities

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced the launch of its latest premium application processor (AP), the Exynos 9 Series 9810. The Exynos 9810, built on Samsung’s second-generation 10-nanometer (nm) FinFET process, brings the next level of performance to smartphones and smart devices with its powerful third-generation custom CPU, faster gigabit LTE modem and sophisticated image processing with deep learning-based software.

In recognition of its innovation and technological advancements, Samsung’s Exynos 9 Series 9810 has been selected as a CES 2018 Innovation Awards HONOREE in the Embedded Technologies product category and will be displayed at the event, which runs January 9-12, 2018, in Las Vegas, USA.

“The Exynos 9 Series 9810 is our most innovative mobile processor yet, with our third-generation custom CPU, ultra-fast gigabit LTE modem and deep learning-enhanced image processing,” said Ben Hur, vice president of System LSI marketing at Samsung Electronics. “The Exynos 9810 will be a key catalyst for innovation in smart platforms such as smartphones, personal computing and automotive for the coming AI era.”

With the benefits of the industry’s most advanced 10nm process technology, the Exynos 9810 will enable seamless multitasking with faster loading and transition times between the latest mobile apps. The processor has a brand-new eight-core CPU under its hood: four powerful third-generation custom cores that can reach 2.9 gigahertz (GHz) and four cores optimized for efficiency. With an architecture that widens the pipeline and improves cache memory, single-core performance is enhanced two-fold and multi-core performance is increased by around 40 percent compared to its predecessor.

Exynos 9810 introduces sophisticated features to enhance user experiences with neural network-based deep learning and stronger security on the most advanced mobile devices. This cutting-edge technology allows the processor to accurately recognize people or items in photos for fast image searching or categorization, or through depth sensing, scan a user’s face in 3D for hybrid face detection. By utilizing both hardware and software, hybrid face detection enables realistic face-tracking filters as well as stronger security when unlocking a device with one’s face. For added security, the processor has a separate security processing unit to safeguard vital personal data such as facial, iris and fingerprint information.

The LTE modem in the Exynos 9810 makes it much easier to broadcast or stream videos at up to UHD resolution, or in even newer visual formats such as 360-degree video. Following the successful launch of the industry’s first 1.0 gigabits per second (Gbps) LTE modem last year, Samsung again leads the industry with the first 1.2Gbps LTE modem embedded in Exynos 9810. It’s also the industry’s first Cat.18 LTE modem to support up to 6x carrier aggregation (CA) for 1.2Gbps downlink and 200 megabits per second (Mbps) uplink. Compared to its predecessor’s 5CA, this new modem delivers more stable data transfers at blazing speed. To maximize the transfer rate, the modem supports a 4×4 MIMO (Multiple-Input, Multiple-Output) and 256-QAM (Quadrature Amplitude Modulation) scheme, and utilizes enhanced Licensed-Assisted Access (eLAA) technology.

Not only will multimedia experiences on mobile devices with the Exynos 9810 be faster, but they will also be more immersive, thanks to dedicated image processing and an upgraded multi-format codec (MFC). With faster and more energy-efficient image and visual processing, users will see advanced stabilization for images and video of up to UHD resolution, real-time out-of-focus photography in high resolution, and brighter pictures in low light with reduced noise and motion blur. The upgraded MFC supports video recording and playback at up to UHD resolution at 120 frames per second (fps). With 10-bit HEVC (high efficiency video coding) and VP9 support, the MFC can render 1,024 different tones for each primary color (red, green and blue). This translates to 1.07 billion possible colors, or 64 times the 16.7 million of the previous 8-bit color format. With a much wider color range and more accurate color fidelity, users will be able to create and enjoy highly immersive content.
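The color-depth arithmetic behind those figures is easy to verify:

    # 10-bit color gives 2**10 = 1,024 tones per primary channel (red, green, blue),
    # versus 2**8 = 256 tones for 8-bit color.
    colors_10bit = (2 ** 10) ** 3        # 1,073,741,824  (about 1.07 billion colors)
    colors_8bit = (2 ** 8) ** 3          # 16,777,216     (about 16.7 million colors)
    print(colors_10bit // colors_8bit)   # 64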

The Exynos 9 Series 9810 is currently in mass production.

For more information about Samsung’s Exynos products, please visit http://www.samsung.com/exynos.

HPC Today by The Editorial Team

31 vendors and research organizations now collaborating on developing standard parallel programming model

Austin, Texas — The OpenMP ARB, a group of leading hardware and software vendors and research organizations that creates the OpenMP standard parallel programming specification, today announced that Cavium, Inc., a leading provider of semiconductor products that enable intelligent processing for enterprise, data center, cloud, and service provider wired and wireless networking, has joined as a new member.

“Cavium’s membership in the OpenMP ARB further highlights our strong belief in industry’s demand for parallel computing and the significance of the ARM Architecture,” said Avinash Sodani, Distinguished Engineer at Cavium. “Cavium’s strong product portfolio includes ThunderX2, a compelling server-class, multi-core ARMv8 CPU suited for the most demanding compute workloads. We look forward to working with other OpenMP members in furthering OpenMP standards to meet the challenges of the Exascale era.”

“We are pleased that Cavium, Inc. has joined the OpenMP ARB and will help strengthen OpenMP support in the ARM software ecosystem,” said Michael Klemm, CEO of the OpenMP ARB.

About OpenMP
The OpenMP ARB has a mission to standardize directive-based multi-language high-level parallelism that is performant, productive and portable. Jointly defined by a group of major computer hardware and software vendors, the OpenMP API is a portable, scalable model that gives parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from embedded systems and accelerator devices to multicore systems and shared-memory systems. The OpenMP ARB owns the OpenMP brand, oversees the OpenMP specification and produces and approves new versions of the specification. Further information can be found at http://www.openmp.org/.

About Cavium
Cavium, Inc. offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit http://www.cavium.com.
