Follow Machine Learning - Reddit on Feedspot

Good day! This is my first time attending such an event, and I was wondering if there are any event veterans here who could offer advice on how I can maximize my learning there.

P.S. This is the Singapore AI School : https://aisummerschool.aisingapore.org/

submitted by /u/patronus816

We all use Jupyter notebooks and PyTorch/TensorFlow/etc.

What libraries, plugins, IDEs, or other tools do you use that others might find useful?

submitted by /u/bergholma

Say I am predicting smokers in my dataset, and from prior knowledge I know that 15% of adults in the U.S. are smokers. My end goal is to deploy my model against a database with information on 200+ million U.S. adults to find potential smokers for a marketing campaign.

For my modeling data, should I purposely mimic the distribution of smokers in the U.S. and have 15% of the data be smokers and 85% be non-smokers? Most of my coworkers have said "it's easier to just balance them," but I believe the model would not generalize to the entire U.S. population if I keep it balanced.
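If you do train on an artificially balanced set, one standard fix is to correct the predicted probabilities back to the known 15% base rate afterwards. A minimal sketch of that prior-shift (Bayes) adjustment in plain Python; the function name is mine:

```python
def correct_prior(p_balanced, train_prior=0.5, true_prior=0.15):
    """Rescale a probability from a model trained at train_prior so it
    reflects the true deployment prior (odds-ratio / Bayes adjustment)."""
    num = p_balanced * true_prior / train_prior
    den = num + (1 - p_balanced) * (1 - true_prior) / (1 - train_prior)
    return num / den

# On a 50/50 training set, a coin-flip score of 0.5 carries no evidence,
# so after correction it collapses to the 15% base rate.
print(correct_prior(0.5))  # → 0.15
```

Either route works for ranking people in the 200M-row database, since the correction is monotonic and doesn't change who the top prospects are; it mainly matters if you need calibrated probabilities or a meaningful decision threshold.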

submitted by /u/jambery

I recently completed my bachelor's in Computer Science and have been working as a Machine Learning Engineer for the past 5 months. Now I'm looking to transition into a bigger company in a similar role.

Background:

I have a bachelor's degree in Computer Science from a mid-tier public university in the United States. During college, I participated in a summer REU in ML. I have co-authored 3 publications (two as second author) and published them at top AI conferences.

My initial plan was to go straight for a PhD in Machine Learning, but I was too ambitious with the schools I picked, and I ended up being rejected by all of them.

Luckily, in the meantime, I landed a job as an ML Engineer, and I've been working in this position for the past 5 months. In that time, I have realized that I like writing code and putting things into production slightly more than hardcore research. I like my job, but I'd like to move to a bigger company with a more established Data Science/ML team.

Here's where I'd like to hear my fellow redditors' thoughts.

I'm debating whether I should do a Master's in Machine Learning and then target the big companies, or whether I can make up for the lack of an advanced degree through work experience.

As you all probably know, most of the job postings I see expect the candidate to have at least a Master's degree, if not a PhD. I understand where they're coming from, as I'm aware that nothing can substitute for the depth of knowledge you gain through the rigor of grad school.

I was in the fast-track master's program during my bachelor's, and I'm only 2 semesters away from a master's in CS with a concentration in Data Science. The reason I didn't continue is that I knew for a fact that 1) I could learn more by working and 2) the quality of the coursework was not great. On the plus side of doing the master's, I have a good relationship with a few professors who are heavily involved in ML research, and I could do a research-based master's with a thesis to boost my profile.

What are your thoughts?

submitted by /u/optimizedEater

So I have an idea for creating AGI based on some things I've been reading from Jürgen Schmidhuber.

Meta-learning

Let's say you have a neural network (the parent network) that can design arbitrary child networks and learn optimized design patterns for a given task. Of course, your parent network won't be well generalized for every type of network design, just for relatively specific tasks. This is partly why we don't have "real" AI or AGI: the tasks are still relatively narrow.

Skimming the literature on meta-learning, it looks like researchers have been able to get some generalization by training their meta-networks on multiple tasks. But data and task identification might limit scalability and high levels of generalization, so I propose a potentially more elegant way.

PowerPlay

This is where Jürgen Schmidhuber's PowerPlay would come in. The PowerPlay algorithm is split into a solver and a problem generator. The generator creates novel problems that the solver has to try to solve. A novel problem is one that is unsolvable by the current solver, just a bit more complicated than the most complicated solvable problem. The solver has to be able to solve all the problems the generator previously created, plus the new one.
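The loop described above can be caricatured in a few lines of Python, with problems reduced to bare difficulty numbers and the solver to a single skill value. This is a toy illustration only (real PowerPlay searches a space of programs), but it shows the invariant: each new problem is unsolvable beforehand, and every archived problem stays solvable.

```python
def powerplay_toy(steps=5):
    """Toy PowerPlay loop: a problem is a difficulty (float) and the
    solver is one 'skill' number; a problem is solved iff skill >= it."""
    skill, archive = 1.0, []
    for _ in range(steps):
        # Generator: propose a novel problem just beyond current ability.
        problem = skill + 0.1
        assert problem > skill                 # novel: unsolvable right now
        # Solver: self-improve until the new problem becomes solvable...
        skill = problem
        # ...without forgetting any previously solved problem.
        assert all(skill >= p for p in archive)
        archive.append(problem)
    return skill, archive
```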

Meta-PowerPlay

Both the problem solver and the generator have parent networks that continually learn to design more sophisticated solvers and generators, until you have much more general problem solvers, or rather a neural network that can design general problem solvers.

It's similar to how GANs work for deepfakes and image generation: the networks try to outsmart each other in a feedback loop, but instead of just doing a face swap, the loop can generate a general-purpose neural network, or at least one that is a lot more general than what we currently have.

Of course, for this to work, it also assumes Jürgen Schmidhuber's idea that intelligence is actually far simpler than we think and could be expressed in a relatively small function once we fully understand it. The meta-solver would therefore be able to derive this function and encapsulate it in its children.

submitted by /u/cryptonewsguy

Hello guys!

I've just started working on a small project that involves analyzing a webcam-generated image and comparing it to images in a dataset folder to try to find a match.

There are some pre-trained models out there that enable one-shot learning (like this GitHub repo, for example: https://github.com/mohitwildbeast/Facial-Recognition-Using-FaceNet-Siamese-One-Shot-Learning).

However, it is not as precise as I wanted, and I don't really know why.

I was looking at the FaceNet model by David Sandberg (https://github.com/davidsandberg/facenet) and it seems promising, but I don't know how to use it for my case.

So, I was wondering if you have any advice or links that might help me!
The system should be simple; it's just a proof of concept. It's enough if the algorithm can compare the face it detects on the webcam to one in an images folder and return the embedding distance (the distance between the faces, where a smaller distance means more similar faces).
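For the comparison step itself, the core logic is small. A NumPy sketch, assuming you already have embedding vectors out of a model like FaceNet (`best_match` is a hypothetical helper of mine, not part of either repo):

```python
import numpy as np

def best_match(query_emb, gallery_embs):
    """Return (index, distance) of the closest face in the gallery.
    Embeddings are L2-normalised first, as FaceNet-style models expect;
    a smaller Euclidean distance means more similar faces."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    dists = np.linalg.norm(g - q, axis=1)
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])
```

You would then declare a match when the distance falls below a threshold; the right cutoff depends on the model, so it's worth tuning it on a small labelled sample of your own images rather than trusting a value from a tutorial.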

I'm not sure if I was clear, as it is my first time writing on this sub.

Thank you all in advance.

submitted by /u/Berdas_

Classification of long time series using residual convolutions and a GRU layer.

Used data from physionet2017 challenge.

It seemed strange to me that participants extracted features with other methods instead of letting the model learn them itself.

A pretty simple example, but I hope someone finds it useful.

Github with code

submitted by /u/hadaev

I'm attempting to wrap my pretrained BERT in an sklearn model with very limited success. I've managed to load input embeddings and labels using datasets, but I hit errors when trying to pass more than one X to include the input masks and segment_ids. Any thoughts? Has anyone had success with this approach?
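One workaround, sketched here with hypothetical helper names rather than any specific API, is to sidestep the single-X restriction: since sklearn's fit/predict interface accepts exactly one 2-D feature matrix, stack input_ids, attention_mask, and segment_ids side by side into one array and split them back apart inside your custom estimator.

```python
import numpy as np

def pack(input_ids, attention_mask, segment_ids):
    """Stack the three (n_samples, seq_len) BERT inputs into one 2-D X
    so it fits sklearn's single-matrix fit/predict interface."""
    return np.hstack([input_ids, attention_mask, segment_ids])

def unpack(X, seq_len):
    """Recover the three inputs inside the estimator's fit/predict."""
    return X[:, :seq_len], X[:, seq_len:2 * seq_len], X[:, 2 * seq_len:]
```

Inside a custom estimator (subclassing `BaseEstimator`/`ClassifierMixin`), `fit` and `predict` would call `unpack` first and then feed the three arrays to BERT as usual.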

submitted by /u/dataOR

Recent news coverage would have you believe that the computer generated art revolution has finally come. Fabian Offert argues that it has been here all along.

submitted by /u/hughbzhang
