Physicist & YouTuber Sabine Hossenfelder on the demise of the Future of Humanity Institute by Deborah W.A. Foulkes
2h ago
Should SoGive publish more notes from calls with charities? by Sanjay
3h ago
SoGive works with major donors. As part of our work, we meet with several (10-30 per year) charities, generally ones recommended by evaluators we trust, or (occasionally) recommended by our own research. We learn a lot through these conversations. This suggests that we might want to publish our call notes so that others can also learn about the charities we speak with. Given that we take notes during the calls anyway, it might seem that it would be low cost for us to simply publish those. This impression would be deceptive: there is a non-trivial time cost for us …
#186 – Why babies are born small in Uttar Pradesh, and how to save their lives (Dean Spears on the 80,000 Hours Podcast) by 80000_Hours
5h ago
We just published an interview: Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts. Episode summary: I work in a place called Uttar Pradesh, which is a state in India with 240 million people. One in every 33 people in the whole world lives in Uttar Pradesh. It would be the fifth largest country if it were its own country. And if it were its own country, you'd probably know about its hum…
Manifund Q1 Retro: Learnings from impact certs by Austin
7h ago
Manifund is a philanthropic startup that runs a website and programs to fund awesome projects. From January to now, we wrapped up 3 different programs for impact certificates (aka venture-style funding for charity projects): ACX Grants, Manifold Community Fund, and the Chinatalk essay competition. Overall, we've learned a lot and are happy with the projects we've funded, but are less excited by impact certs than before: it's been hard to get investor interest, and we still haven't found a use case where certs led to better funding decisions. For the next …
Émile P. Torres’s history of dishonesty and harassment by anonymous-for-obvious-reasons
11h ago
This is a cross-post and you can see the original here, written in 2022. I am not the original author, but I thought it was good for more EAs to know about this. I am posting anonymously for obvious reasons, but I am a longstanding EA who is concerned about Torres's effects on our community. An incomplete summary. Introduction: This post compiles evidence that Émile P. Torres, a philosophy student at Leibniz Universität Hannover in Germany, has a long pattern of concerning behavior, which includes gross distortion and falsification, persistent harassment …
ChatGPT: towards AI subjectivity by KrisDAmato
13h ago
Abstract: Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical of current scholarship, which often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault's work, arguing that curr…
AMA: Lewis Bollard, Program Director of Farm Animal Welfare at OpenPhil by tobytrem
15h ago
This announcement was written by Toby Tremlett, but don't worry, I won't answer the questions for Lewis. Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread before Wednesday (you can add questions later, but he may not see them). Lewis leads Open Philanthropy's Farm Animal Welfare Strategy, which you can read more about here. Open Philanthropy has given over 400 grants in its Farm Animal Welfare focus area, ranging from $15,000 to support anima…
Launching applications for AI Safety Careers Course India 2024 by varun_agr
18h ago
Announcing open applications for the AI Safety Careers Course India 2024! Axiom Futures has launched its flagship AI Safety Careers Course 2024 to equip emerging talent working in India with foundational knowledge in AI safety. Spread out across 8-10 weeks, the program will provide candidates with key skills and networking opportunities to take their first step toward an impactful career in the domain. Each week will correspond with a curriculum module that candidates will be expected to complete and discuss with their cohort during the facilita…
More than 50% of EAs probably believe Enlightenment is real. This is a big deal right? by yanni kyriacos
20h ago
About a week ago, Spencer Greenberg and I were debating what proportion of Effective Altruists believe enlightenment is real. Since he has a large audience on X, we thought a poll would be a good way to increase our confidence in our predictions. Before I share my commentary, I think in hindsight it would have been better to ask the question like this: 'Do you believe that awakening/enlightenment (which frees a person from most or all suffering for extended periods, like weeks at a time) is a real phenomenon that some people achieve (e.g., through meditati…
The Intentional Stance, LLMs Edition by Eleni_A
1d ago
Crossposted from LessWrong: https://www.lesswrong.com/posts/zjGh93nzTTMkHL2uY/the-intentional-stance-llms-edition In memoriam Daniel C. Dennett. tl;dr: I sketch out what it means to apply Dennett's Intentional Stance to LLMs. I argue that the intentional vocabulary is already ubiquitous in experimentation with these systems; therefore, what is missing is the theoretical framework to justify this usage. I aim to make up for that and explain why the intentional stance is the best available explanatory tool for LLM behavior. Choosing Between Stance…
