On NeurIPS’ High School Paper Track
David Stutz • A student's point of view
by David Stutz
6d ago
A short disclaimer is necessary before diving in: what follows is a rather personal opinion on the subject, driven by my personal experiences in AI research. As such, it is not meant to blame, contradict or discredit anyone or anything; instead, it is an attempt to add color. I think the project track in question is rather specific; I am sure much thought has gone into it, and NeurIPS will iterate on it in future instances of the conference. That being said, I think many of the arguments raised on X are not necessarily about NeurIPS' decision to have such a track specifically. Instead, many arguments can ..read more
Thoughts on Academia and Industry in Machine Learning Research
3w ago
Introduction By construction, a PhD has a clear end. Depending on the program, country and field, a PhD is supposed to be completed within 3-6 years, and it is usually awarded after an official defense of the research work. This is in contrast to most other careers and jobs, especially in industry but also in the public sector. Even though a PhD is often considered a qualification for independent research and thereby acts as the entry point to an academic career, it is commonly assumed that most PhD graduates do not continue in academia. This also matches my impression and surveys among PhD students i ..read more
On the Utility of Conformal Prediction Intervals
2M ago
Ben Recht recently published some blog articles questioning the utility of prediction intervals and sets, especially as obtained using distribution-free, conformal methods. In this article, I want to add some color to the discussion given my experience with applying these methods in various settings. Let me start with the elephant in the room: do people actually want uncertainty estimates in general? If you ask people in academia or industry, the first answer is usually yes. Somehow, as researchers and engineers, we want to understand when the models we train "know" and when they do not know ..read more
Vanderbilt Machine Learning Seminar Talk “Conformal Prediction under Ambiguous Ground Truth”
5M ago
Abstract Conformal Prediction (CP) allows one to perform rigorous uncertainty quantification by constructing a prediction set $C(X)$ satisfying $\mathbb{P}_{agg}(Y \in C(X))\geq 1-\alpha$ for a user-chosen $\alpha \in [0,1]$ by relying on calibration data $(X_1,Y_1),...,(X_n,Y_n)$ from $\mathbb{P}=\mathbb{P}_{agg}^{X} \otimes \mathbb{P}_{agg}^{Y|X}$. It is typically implicitly assumed that $\mathbb{P}_{agg}^{Y|X}$ is the "true" posterior label distribution. However, in many real-world scenarios, the labels $Y_1,...,Y_n$ are obtained by aggregating expert opinions using a voting procedure, result ..read more
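The standard guarantee the abstract refers to can be illustrated with a minimal split conformal sketch in NumPy. This is my own illustrative code, not taken from the paper; the function name and the choice of nonconformity score (one minus the score of the true label) are assumptions, and the paper's point is precisely that the calibration labels may come from aggregated expert votes rather than the true posterior:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Minimal split conformal prediction for classification (a sketch).

    cal_probs:  (n, K) model scores on calibration data (X_1,Y_1),...,(X_n,Y_n)
    cal_labels: (n,)   calibration labels Y_i
    test_probs: (m, K) model scores on test inputs
    Returns a boolean (m, K) matrix whose rows are the prediction sets C(X).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the score assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, level, method="higher")
    # C(X) contains every label whose nonconformity score is <= qhat;
    # marginally over calibration and test draws, P(Y in C(X)) >= 1 - alpha.
    return (1.0 - test_probs) <= qhat
```

Under exchangeability of calibration and test data, the returned sets satisfy the marginal coverage guarantee stated in the abstract; when the calibration labels are noisy aggregates, that guarantee holds only with respect to the aggregated distribution.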
PRECISE Seminar Talk “Evaluating and Calibrating AI Models with Uncertain Ground Truth”
5M ago
Abstract For safety, AI systems in health undergo thorough evaluations before deployment, validating their predictions against a ground truth that is assumed to be certain. However, this is often not the case: the ground truth may itself be uncertain. Unfortunately, this is largely ignored in the standard evaluation of AI models, but it can have severe consequences, such as overestimating future performance. To avoid this, we measure the effects of ground truth uncertainty, which we assume decomposes into two main components: annotation uncertainty, which stems from the lack of reliable annotations, and in ..read more
ArXiv Pre-Print “Evaluating AI Systems under Uncertain Ground Truth: a Case Study in Dermatology”
5M ago
Abstract For safety, AI systems in health undergo thorough evaluations before deployment, validating their predictions against a ground truth that is assumed to be certain. However, this is often not the case: the ground truth may itself be uncertain. Unfortunately, this is largely ignored in the standard evaluation of AI models, but it can have severe consequences, such as overestimating future performance. To avoid this, we measure the effects of ground truth uncertainty, which we assume decomposes into two main components: annotation uncertainty, which stems from the lack of reliable annotations, and in ..read more
Interviewed by AI Coffee Break with Letitia
5M ago
Letitia is a PhD student at Heidelberg University in the Natural Language Processing group, working on vision-language models. On top of her research, she runs a YouTube channel covering AI papers and developments. So it was a pleasure to be interviewed for her channel about my PhD research on adversarial robustness ..read more
TMLR Paper “Conformal Prediction under Ambiguous Ground Truth”
6M ago
Abstract Conformal Prediction (CP) allows one to perform rigorous uncertainty quantification by constructing a prediction set $C(X)$ satisfying $\mathbb{P}_{agg}(Y \in C(X))\geq 1-\alpha$ for a user-chosen $\alpha \in [0,1]$ by relying on calibration data $(X_1,Y_1),...,(X_n,Y_n)$ from $\mathbb{P}=\mathbb{P}_{agg}^{X} \otimes \mathbb{P}_{agg}^{Y|X}$. It is typically implicitly assumed that $\mathbb{P}_{agg}^{Y|X}$ is the "true" posterior label distribution. However, in many real-world scenarios, the labels $Y_1,...,Y_n$ are obtained by aggregating expert opinions using a voting procedure, result ..read more
Benchmarking Bit Errors in Quantized Neural Networks with PyTorch
6M ago
Introduction I was planning to write an article series on experimenting with bit errors in quantized deep networks, similar to my article series on adversarial robustness, with accompanying PyTorch code. However, in light of the incredible recent progress in machine learning, I decided to focus on other projects. Nevertheless, I wanted to share the tutorial code I prepared, with some pointers for those interested in quantization and bit error robustness. So, in this article, I share links to the code, some results, and pointers to the relevant literature and background. Figure 1: Exam ..read more
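The core operation in such benchmarks, flipping random bits in quantized weights, can be sketched in a few lines of NumPy. This is an illustrative helper of my own, not the tutorial code the post links to; the function name and the int8 weight layout are assumptions:

```python
import numpy as np

def inject_bit_errors(weights_q, p, rng=None):
    """Inject random bit flips into int8-quantized weights (a sketch).

    weights_q: int8 array of quantized weights
    p:         per-bit flip probability
    Each of the 8 bits of every weight flips independently with probability p,
    modeling random bit errors in the memory holding the quantized weights.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Reinterpret the int8 weights as uint8 for well-defined bitwise ops.
    w = weights_q.view(np.uint8).copy()
    for bit in range(8):
        # Independently decide, per weight, whether this bit position flips.
        flips = rng.random(w.shape) < p
        w ^= flips.astype(np.uint8) << bit
    return w.view(np.int8)
```

Evaluating a quantized network's accuracy after applying such a helper to its weight tensors, across a range of flip probabilities, is the kind of benchmark the article series would have covered.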
My Impressions (and Application) of the Heidelberg Laureate Forum 2023
7M ago
The Heidelberg Laureate Forum (HLF) invites laureates of the premier computer science (CS) and math prizes alongside young researchers for a networking conference in beautiful Heidelberg. Specifically, there are six awards that can be considered the Nobel prizes of CS and math: the ACM A.M. Turing Award, the ACM Prize in Computing, the Abel Prize, the Fields Medal, the Nevanlinna Prize and the IMU Abacus Medal. Each year, different laureates are invited, and young researchers pursuing their undergraduate, graduate or post-doc studies can apply to join them. Besides an amazing program, the forum is enti ..read more