Reddit » Machine Learning
5,399 FOLLOWERS
Reddit is a network of communities based on people's interests. This subreddit hosts all Machine Learning discussions: research, projects, news; discuss anything ML-related here.
Reddit » Machine Learning
3h ago
Word embeddings like Word2Vec and GloVe have revolutionized natural language processing, offering compact and dense representations of word meanings. However, these embeddings typically represent words as real-valued vectors, potentially limiting their ability to capture complex semantic relationships. In this proposal, we explore an alternative approach: representing word vectors as complex numbers.
We propose converting Word2Vec or GloVe vectors into complex numbers, where the real part captures magnitude and the imaginary part encodes additional semantic information. For instance, consider ..read more
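The proposal is truncated here, but one minimal way to sketch the real-to-complex conversion (an assumption about the pairing scheme, not necessarily the authors' exact method) is to fold consecutive dimensions of a Word2Vec/GloVe vector into real and imaginary parts:

```python
import numpy as np

def to_complex(vec):
    """Fold a real-valued embedding into a complex one by pairing
    consecutive dimensions: even indices -> real parts, odd -> imaginary."""
    vec = np.asarray(vec, dtype=float)
    assert vec.size % 2 == 0, "needs an even number of dimensions"
    pairs = vec.reshape(-1, 2)
    return pairs[:, 0] + 1j * pairs[:, 1]

def to_real(cvec):
    """Inverse: interleave real and imaginary parts back into a real vector."""
    return np.column_stack([cvec.real, cvec.imag]).ravel()
```

This halves the dimensionality while keeping the mapping lossless, so standard similarity measures can be recomputed on the complex side (e.g. with the Hermitian inner product).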
Reddit » Machine Learning
6h ago
Hi, does anyone know of a French L2 GEC (grammatical error correction) dataset that was published at a conference?
submitted by /u/R-e-v-e-r-i-e-
Reddit » Machine Learning
8h ago
My primary expertise is audio processing, but I believe this task comes up in other domains too: running a model on chunks of an infinitely long input. While for some architectures it is straightforward, it can get tedious for convolutional nets. I put together a comprehensive tutorial on how to build streaming ML applications: https://balacoon.com/blog/streaming_inference/. I would be curious to learn whether this is a common problem and how people usually deal with it, because resources on the topic are surprisingly scarce.
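Not from the linked tutorial, but a minimal NumPy sketch of the core trick for streaming convolutions: carry the last `kernel_size - 1` samples between chunks so the chunked output matches convolving the whole signal at once:

```python
import numpy as np

def stream_conv(chunks, kernel):
    """Causal 1-D convolution over an arbitrarily long stream.
    Carries the last len(kernel)-1 samples between chunks so that
    the chunked output equals convolving the full signal in one go."""
    ctx = np.zeros(len(kernel) - 1)  # left context, zero-initialized
    for chunk in chunks:
        buf = np.concatenate([ctx, chunk])
        yield np.convolve(buf, kernel, mode="valid")
        ctx = buf[len(chunk):]  # keep the trailing len(kernel)-1 samples
```

The same state-carrying idea generalizes to stacked conv layers, where each layer keeps its own context buffer sized to its receptive field.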
submitted by /u/clementruhm
Reddit » Machine Learning
8h ago
I'm planning to dive into ML and I'd like to specialize in a particular subfield.
What are the promising subfields of ML, and which ones are in high demand?
submitted by /u/Dramatic_Chance9577
Reddit » Machine Learning
8h ago
https://preview.redd.it/jpiyt4b9yhwc1.png?width=1165&format=png&auto=webp&s=95d80f8f9c9241d722717ad25215be4077d541ca
Based on the MSE, this looks good, right? But why does my R^2 start off so negative and approach 0? Could it be a bug in how I am calculating it?
This happened after I min-max scaled the labels before training.
This is an LSTM predicting runs scored in baseball games.
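One quick sanity check (a sketch, assuming R^2 is computed in the usual 1 - SS_res/SS_tot form): R^2 is exactly 0 for a constant mean predictor and goes strongly negative whenever predictions and targets sit on different scales — e.g. if the model outputs min-max scaled values but R^2 is computed against the raw labels:

```python
import numpy as np

def r2_score_manual(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot; negative whenever the model is worse
    than always predicting the mean of y_true."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

runs = np.array([2.0, 5.0, 8.0, 11.0])  # illustrative raw label scale
# Hypothetical model output left on the min-max scale:
scaled_preds = (runs - runs.min()) / (runs.max() - runs.min())
```

If the curve climbs from very negative toward 0, the model may simply be converging toward predicting the mean on the wrong scale; inverse-transform the predictions before scoring.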
submitted by /u/Cloverdover1
Reddit » Machine Learning
12h ago
Hello Everyone,
I am trying to do a small fraud-detection project and my dataset is highly imbalanced. I used random undersampling because the minority class is pretty small, and I also tried SMOTE and combinations with SMOTE; the best recall score I got (0.95) was with random undersampling alone. I thought GridSearchCV would increase it, but instead of increasing, it is decreasing, even though I set it to optimize for recall. Why is this happening?
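Two frequent culprits: resampling applied once, outside the cross-validation folds (leakage, which inflates the 0.95), and the fact that GridSearchCV reports mean CV recall rather than held-out recall. A minimal sketch of tuning directly for recall on synthetic imbalanced data (using `class_weight` in place of undersampling; all numbers are illustrative, not from the post):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the fraud set: ~5% positives (assumption)
X, y = make_classification(
    n_samples=2000, n_features=10, weights=[0.95, 0.05], random_state=0
)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"class_weight": [None, "balanced"], "C": [0.1, 1.0, 10.0]},
    scoring="recall",  # select hyperparameters by minority-class recall
    cv=5,
)
grid.fit(X, y)
```

If you do resample, put the sampler and classifier in an `imblearn.pipeline.Pipeline` passed to GridSearchCV, so resampling happens inside each training fold only.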
submitted by /u/Legal_Hearing555
Reddit » Machine Learning
12h ago
Hello, I am trying to model nitrate concentrations in streams in Bavaria, Germany, using a Random Forest model. I am using Python, primarily sklearn. I have data from 490 water-quality stations. I am following the methodology in the paper by Longzhu Q. Shen et al., which can be found here: https://www.nature.com/articles/s41597-020-0478-7
I want to split my dataset into training and testing set such that the spatial distribution of data in both sets is identical. The idea is that if data splitting ignores the spatial distribution, there is a risk that the training set mig ..read more
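The post is truncated above, but one common way to get spatially matched splits (not necessarily the paper's exact procedure) is to cluster station coordinates into regions and stratify the split on the region labels, so both sets cover the same areas in similar proportions. Coordinates and cluster count below are illustrative stand-ins:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
coords = rng.uniform(size=(490, 2))  # stand-in for station (lon, lat) pairs

# Group stations into spatial regions, then stratify the split on region labels
regions = KMeans(n_clusters=8, n_init=10, random_state=42).fit_predict(coords)
train_idx, test_idx = train_test_split(
    np.arange(len(coords)), test_size=0.3, stratify=regions, random_state=42
)
```

Note the flip side: if the goal is to estimate performance on *unseen* regions, you would instead hold out whole clusters (e.g. `GroupShuffleSplit` with `groups=regions`), which gives a harder but more honest spatial test.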
Reddit » Machine Learning
14h ago
Links:
https://www.snowflake.com/blog/arctic-open-efficient-foundation-language-models-snowflake/
https://replicate.com/snowflake/snowflake-arctic-instruct
submitted by /u/topcodemangler
Reddit » Machine Learning
14h ago
This is a prompt I entered into MS Copilot (GPT-4 Turbo).
It's in German, but it just means "Would there be any disadvantages if I took the full bath first?", so this can't be another SolidGoldMagikarp or similar, because the words were clearly in both the tokenizer and the training vocabulary.
Why would such a simple sentence cause this? Any guesses? (I also tried Claude Opus and Llama 3 70B, which worked fine.)
https://preview.redd.it/9x6mva7b6gwc1.png?width=1129&format=png&auto=webp&s=bb6ac52d1c52d981161e8a864c5d1dd3794ca392
submitted by /u/michael-relleum
Reddit » Machine Learning
15h ago
Hi All,
I am working on a project where I want to create speaker-aware transcripts from audio/video, preferably using open-source solutions. I have tried many approaches, but nothing seems to work well enough out of the box.
I have tried:
whisperX: https://github.com/m-bain/whisperX (uses pyannote)
whisper-diarization: https://github.com/MahmoudAshraf97/whisper-diarization (uses Nemo)
AWS Transcribe
AssemblyAI API
Picovoice API
I'll need to dig deeper to understand what's causing the incorrect diarization, but I am looking for suggestions to improve speaker diarization. Plea ..read more
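Whichever diarizer ends up most accurate, the gluing step is usually the same: assign each ASR segment the speaker whose diarization turn overlaps it the most. A small self-contained sketch (the segment/turn dict shapes are an assumption for illustration, not any library's API):

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals (seconds)."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def label_segments(transcript, turns):
    """Attach to each ASR segment the speaker whose diarization turn
    overlaps it most; None if no turn overlaps at all."""
    labeled = []
    for seg in transcript:
        best, best_ov = None, 0.0
        for t in turns:
            ov = overlap(seg["start"], seg["end"], t["start"], t["end"])
            if ov > best_ov:
                best, best_ov = t["speaker"], ov
        labeled.append({**seg, "speaker": best})
    return labeled
```

Errors often come from this alignment step rather than the diarizer itself, so inspecting the raw turns and the raw ASR timestamps separately can localize the problem.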