2024 Motwani Postdoc Announced
Theory Dish Blog
by Omer Reingold
6M ago
The theory group at Stanford invites applications for the Motwani postdoctoral fellowship in theoretical computer science. Information and application instructions below. Applications will be accepted until the positions are filled, but review of applicants will begin after Dec 15. Website: https://academicjobsonline.org/ajo/jobs/26450 Email: theory.stanford@gmail.com ..read more
Optimal Metric Distortion for Voting – A Proof from the Book
Theory Dish Blog
by Prasanna Ramakrishnan
7M ago
In this post, we’ll revisit the (deterministic) metric distortion conjecture in voting theory, which was recently proved by Gkatzelis, Halpern, and Shah [GHS20], and elegantly re-proved by Kempe and Kizilkaya [KK22]. The conjecture concerns the following question. Suppose we have an election with voters and candidates who lie in a metric space, but the only information we have is each voter’s ranking of the candidates in increasing order of distance. The cost of a candidate is their total distance to the voters. Can we design a voting rule that always selects a candidate whose cost is close to ..read more
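To make the distortion computation concrete, here is a minimal sketch (a hypothetical toy instance of mine on the real line, not from the post): plurality sees only the distance-based ballots, and we compare its winner’s cost to the best candidate’s cost.

    from collections import Counter

    # Hypothetical toy instance: voters and candidates are points on the real line.
    voters = [0.1, 0.1, 0.1, -5.0, -5.0]
    candidates = {"A": 1.0, "B": -1.0}

    def ballot(v):
        # A voter's ranking: candidates in increasing order of distance.
        return sorted(candidates, key=lambda c: abs(candidates[c] - v))

    def cost(c):
        # Cost of candidate c: total distance to the voters.
        return sum(abs(candidates[c] - v) for v in voters)

    # Plurality sees only the ballots: elect the most common first choice.
    winner = Counter(ballot(v)[0] for v in voters).most_common(1)[0][0]
    best = min(candidates, key=cost)
    print(winner, cost(winner) / cost(best))  # A, distortion 14.7/11.3 ≈ 1.30

Here plurality elects A (three first-place votes) even though B has lower total distance, giving distortion about 1.3; the conjecture concerns the worst case of this ratio over all metric instances.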
DNF Minimization, Part II
Theory Dish Blog
by Caleb Koch
8M ago
In the last post, I discussed the complexity of DNF minimization given a dataset. Specifically, given a dataset of input/output pairs, how hard is it to compute the smallest DNF consistent with the dataset? In this post, we’ll look at a variant of this problem where one requires that the dataset specify a label for every point in {0,1}^n. DNF truth table minimization. DNF truth table minimization is the variant of DNF minimization where the input dataset is a truth table for a function f. The extra constraint on the dataset can only make the problem easier, and indeed using the greedy approximation for Set-Cover ..read more
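To make the Set-Cover connection concrete, here is a minimal sketch (my own toy code on an example majority function, not from the post): every term that never fires on a 0-input of f is a “set” covering the 1-inputs it accepts, and greedy Set-Cover assembles a DNF from such terms.

    from itertools import product

    n = 3
    def f(x):                         # example: majority of 3 bits
        return sum(x) >= 2

    ones  = {x for x in product([0, 1], repeat=n) if f(x)}
    zeros = {x for x in product([0, 1], repeat=n) if not f(x)}

    # A term fixes each variable to 0, to 1, or leaves it free (None).
    def accepts(term, x):
        return all(t is None or t == b for t, b in zip(term, x))

    # Keep only implicants: terms that reject every 0-input of f.
    implicants = [t for t in product([0, 1, None], repeat=n)
                  if not any(accepts(t, x) for x in zeros)]

    # Greedy Set-Cover over the 1-inputs.
    dnf, uncovered = [], set(ones)
    while uncovered:
        best = max(implicants, key=lambda t: sum(accepts(t, x) for x in uncovered))
        dnf.append(best)
        uncovered -= {x for x in uncovered if accepts(best, x)}

    print(dnf)  # three 2-literal terms: the minimal DNF for majority of 3 bits

Each greedy step picks the implicant covering the most still-uncovered 1-inputs, the standard logarithmic-factor approximation for Set-Cover alluded to above.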
DNF Minimization, Part I
Theory Dish Blog
by Caleb Koch
8M ago
Given a dataset consisting of input/output pairs, how do you find a small DNF consistent with the data? This problem is known as DNF minimization and has appeared in various forms throughout the history of computer science. In this two-part blog post, I’ll survey some results about the complexity of this problem and some connections to learning DNFs. History and Motivation. DNF minimization has been a central problem in the logic synthesis community for decades. Within this area, the problem is known as “two-level logic synthesis”. It has a rich history going back to a paper written by Quine in ..read more
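For readers new to the terminology, here is a minimal sketch (the dataset and DNF are hypothetical examples of mine) of what “a DNF consistent with the data” means: an OR of AND-terms that reproduces the label on every input/output pair.

    # A term maps variable index -> required bit; a DNF is an OR of such terms.
    def eval_term(term, x):
        return all(x[i] == b for i, b in term.items())

    def eval_dnf(dnf, x):
        return any(eval_term(t, x) for t in dnf)

    dataset = [((1, 1, 0), 1), ((0, 1, 1), 1), ((0, 0, 0), 0), ((1, 0, 0), 0)]
    dnf = [{0: 1, 1: 1}, {1: 1, 2: 1}]   # (x1 AND x2) OR (x2 AND x3)

    print(all(eval_dnf(dnf, x) == bool(y) for x, y in dataset))  # True

DNF minimization asks for the smallest such consistent DNF, e.g. with the fewest terms.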
RANDOM & APPROX 2023
Theory Dish Blog
by Omer Reingold
1y ago
Guest post by the 2023 Program Committee Chairs: Nicole Megow (APPROX) and Adam Smith (RANDOM). The 27th International Workshop on Randomization and Computation (RANDOM 2023) and the 26th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX 2023) will be held in person in Atlanta, GA, USA on September 11-13, 2023. RANDOM 2023 focuses on applications of randomness to computational and combinatorial problems, while APPROX 2023 focuses on algorithmic and complexity-theoretic issues relevant to the development of efficient approximate solutions ..read more
2023 Motwani Postdoc Announced
Theory Dish Blog
by Omer Reingold
1y ago
The theory group at Stanford invites applications for the Motwani postdoctoral fellowship in theoretical computer science. Information and application instructions below. Applications will be accepted until the positions are filled, but review of applicants will begin after Dec 15. Website: https://academicjobsonline.org/ajo/jobs/23929 Email: theory.stanford@gmail.com ..read more
9th TOCA-SV – 11/18
Theory Dish Blog
by Omer Reingold
1y ago
The 9th TOCA-SV day is coming on Friday, 11/18/22, on the Google campus in Mountain View. It is free, but you need to register here, where you can also see an up-to-date list of talks and abstracts. Schedule (tentative):
0930-1000: Breakfast
1000-1015: Welcome
1015-1100: Gagan Aggarwal (Google)
1100-1145: Li-Yang Tan (Stanford)
1145-1245: Short talks I
1245-1400: Lunch (provided) and campus tour
1400-1445: Sandy Irani (Simons/UC Berkeley)
1445-1530: Kunal Talwar (Apple)
1530-1600: Coffee Break
1600-1730: Short talks II ..read more
TCS Women Spotlight Workshop at STOC’22
Theory Dish Blog
by ccanonne
1y ago
The 5th TCS Women Spotlight Workshop, organized by Elena Grigorescu (Purdue), Barna Saha (UCSD), Virginia Vassilevska Williams (MIT), and Mary Wootters (Stanford), is happening on Monday at 8:45 am Rome Time (GMT+2), as part of STOC’22. To attend the workshop in person, just show up! You can also join online for free (no STOC registration required!) by registering here. The event is open to all. The workshop features an inspiring series of speakers, including an Inspiration talk by Irit Dinur: “Expansion, PCPs, and high dimensional expansion, or How I re-proved the PCP theorem and how I h ..read more
TCS Women Spotlight Workshop at STOC 2022: Travel grants and call for speaker nominations
Theory Dish Blog
by Omer Reingold
1y ago
From the organizers: =============== You are cordially invited to the TCS Women Spotlight Workshop! The workshop will be held on Monday, June 20, 2022, in Rome, Italy, as part of the 54th Symposium on Theory of Computing (STOC) and TheoryFest. The workshop is open to all. More information about the workshop is available here: https://sigact.org/tcswomen/ In particular, we would like to highlight the TCS Women Travel Scholarships (deadline April 25) and a call for nominations for Rising Stars talks at the workshop (deadline May 9). More information below. Hope to ..read more
Backpropagation ≠ Chain Rule
Theory Dish Blog
by Lunjia Hu
1y ago
The chain rule is a fundamental result in calculus. Roughly speaking, it states that if a variable $y$ is a differentiable function of intermediate variables $u_1, \ldots, u_k$, and each intermediate variable $u_i$ is itself a differentiable function of $x$, then we can compute the derivative as follows: $\frac{dy}{dx} = \sum_{i=1}^{k} \frac{\partial y}{\partial u_i}\,\frac{du_i}{dx}$. Besides being a handy tool for computing derivatives in calculus homework, the chain rule is closely related to the backpropagation algorithm that is widely used for computing derivatives (gradients) in neural network training. This blog post by Boaz Barak is a beautiful tutorial on the chain rule and the backpropagation ..read more
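As a quick numerical illustration (my own example, not from Boaz Barak’s tutorial), the following sketch checks the chain-rule formula above against a finite-difference derivative for y = u_1 u_2 with u_1 = sin(x) and u_2 = x^2.

    import math

    def y(x):                         # y = u1 * u2, with u1 = sin(x), u2 = x**2
        return math.sin(x) * x**2

    x = 1.3
    u1, u2 = math.sin(x), x**2
    dy_du1, dy_du2 = u2, u1           # partials of y = u1 * u2
    du1_dx, du2_dx = math.cos(x), 2 * x

    chain_rule = dy_du1 * du1_dx + dy_du2 * du2_dx

    h = 1e-6                          # central finite difference
    numeric = (y(x + h) - y(x - h)) / (2 * h)
    print(chain_rule, numeric)        # the two values agree to high precision

Backpropagation applies this rule systematically over a whole computation graph; the post explores in what sense the algorithm differs from the calculus identity itself.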
