EA Greaterwrong Forum
This community is dedicated to discussing effective altruism. Talk about the future of altruism, the mental health diagnostic system, Aesthetics as Epistemic Humility, and more here.
3h ago
SoGive works with major donors.
As part of our work, we meet with a number of charities (10-30 per year), generally ones recommended by evaluators we trust or, occasionally, by our own research.
We learn a lot through these conversations. This suggests that we might want to publish our call notes so that others can also learn about the charities we speak with.
Given that we take notes during the calls anyway, it might seem low-cost for us to simply publish them. That appearance is deceptive: there is a non-trivial time cost for us ..read more
5h ago
We just published an interview: Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.
Episode summary
I work in a place called Uttar Pradesh, which is a state in India with 240 million people. One in every 33 people in the whole world lives in Uttar Pradesh. It would be the fifth largest country if it were its own country. And if it were its own country, you’d probably know about its hum ..read more
7h ago
Manifund is a philanthropic startup that runs a website and programs to fund awesome projects. From January to now, we wrapped up 3 different programs for impact certificates (aka venture-style funding for charity projects): ACX Grants, Manifold Community Fund, and the Chinatalk essay competition.
Overall, we’ve learned a lot and are happy with the projects we’ve funded, but are less excited by impact certs than before — it’s been hard to get investor interest, and we still haven’t found a use case where certs led to better funding decisions. For the next ..read more
11h ago
This is a cross-post; you can see the original, written in 2022, here. I am not the original author, but I thought more EAs should know about it.
I am posting anonymously for obvious reasons, but I am a longstanding EA who is concerned about Torres’s effects on our community.
An incomplete summary
Introduction
This post compiles evidence that Émile P. Torres, a philosophy student at Leibniz Universität Hannover in Germany, has a long pattern of concerning behavior, which includes gross distortion and falsification, persistent harassment ..read more
13h ago
Abstract: Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that curr ..read more
15h ago
This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.
Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread before Wednesday (you can add questions later, but he may not see them).
Lewis leads Open Philanthropy’s Farm Animal Welfare Strategy, which you can read more about here. Open Philanthropy has given over 400 grants in its Farm Animal Welfare focus area, ranging from $15,000 to support anima ..read more
18h ago
Announcing open applications for the AI Safety Careers Course India 2024!
Axiom Futures has launched its flagship AI Safety Careers Course 2024 to equip emerging talent in India with foundational knowledge in AI safety. Spread across 8-10 weeks, the program will provide candidates with key skills and networking opportunities to take their first step toward an impactful career in the domain. Each week will correspond to a curriculum module that candidates will be expected to complete and discuss with their cohort during the facilita ..read more
20h ago
About a week ago, Spencer Greenberg and I were debating what proportion of effective altruists believe enlightenment is real. Since he has a large audience on X, we thought a poll would be a good way to increase our confidence in our predictions.
Before I share my commentary, I think in hindsight it would have been better to ask the question like this: ‘Do you believe that awakening/enlightenment (which frees a person from most or all suffering for extended periods, like weeks at a time) is a real phenomenon that some people achieve (e.g., through meditati ..read more
1d ago
Crossposted from LessWrong: https://www.lesswrong.com/posts/zjGh93nzTTMkHL2uY/the-intentional-stance-llms-edition
In memoriam: Daniel C. Dennett.
tl;dr: I sketch out what it means to apply Dennett’s Intentional Stance to LLMs. I argue that intentional vocabulary is already ubiquitous in experimentation with these systems; what is missing, therefore, is the theoretical framework to justify this usage. I aim to fill that gap and explain why the intentional stance is the best available explanatory tool for LLM behavior.
Choosing Between Stance ..read more