Anyone into machine learning?

So I’m just getting into machine learning… like, green AF. I’ve been reading a bunch over the last few days on TensorFlow, PyTorch, Keras, etc.

I have a workstation that I happened to pick up for a steal of a deal, and I want to use it to get into machine learning for a recycling idea I have. But I honestly have absolutely no idea where to start. Other than that, all I know is that it’s an absolute must that I incorporate machine learning into my idea.

If anyone is wondering what my workstation is:

CPU: Threadripper Pro 3995WX
GPU: ASRock RX 6800 XT
MB: ASUS WRX80E
RAM: 64 GB DDR4-2666 (non-ECC)

I know that I’d need to replace my GPU with at least an A6000, or more ideally an A100/H100 (crazy expensive, I know), because AMD cards are supposedly garbage for ML, and that I’ll need more RAM, ideally ECC.


That’s quite the box. Good find.

Find a problem, or make one up. Solve it. Wash, rinse, repeat.

ML is a very broad field. Computer vision, text analysis, and time series analysis have little in common with each other. Everyone’s got a hard-on for neural nets, and they’re a useful tool, but I think one that people reach for too early, outside of the specific use cases where they’re currently the best tool for the job: CV/image processing specifically.

I’ve never done much vision work, so I’ve never really played with video-card parallel processing. You can do a lot without the brute force of CUDA if you learn to chunk your data and multithread your operations.
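To illustrate what I mean by chunking and multithreading, here’s a minimal sketch in Python (the data and the per-chunk work are made up; the point is the pattern):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(data, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def process(chunk):
    # Stand-in for real per-chunk work (feature extraction, scoring, ...).
    return sum(x * x for x in chunk)

data = list(range(1000))
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process, chunked(data, 100)))
total = sum(partials)  # same result as a single-threaded loop over all of data
```

For CPU-bound pure-Python work you’d often swap `ThreadPoolExecutor` for `ProcessPoolExecutor` because of the GIL, but the chunking pattern is the same either way.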

I personally think time series analysis and forecasting is one of the areas with the most real-world leverage, but I’m just some guy who doesn’t really know what he’s doing.
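As a taste of how approachable the basics are, a one-step moving-average forecast is just a few lines of plain Python (the demand numbers below are invented):

```python
def moving_average_forecast(series, window=3):
    """Naive one-step-ahead forecast: the mean of the last `window` points."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    return sum(series[-window:]) / window

demand = [10, 12, 11, 13, 12, 14]   # e.g. weekly volumes, made up
forecast = moving_average_forecast(demand, window=3)  # (13 + 12 + 14) / 3 = 13.0
```

Real forecasting models (ARIMA, exponential smoothing, and so on) add trend and seasonality handling on top, but they all start from this same idea of summarizing recent history.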

If you want to learn, the Statistical Learning online course is excellent, as is the one by Andrew Ng that was on Coursera at one point.

Some enjoy Kaggle and similar, though I’ve never had the time/inclination to play with such things myself. I’ve heard good things, though.


I think I’ve found a few along the way, actually. I’ve got some good ideas on how to solve them too; it’s just a matter of implementation.

dauntingly broad

So I’m assuming these would be three different algos/processes/databases… honestly not sure what to consider them.

A big part of my idea involves image processing.

I only just did some reading today on parallel processing… there’s so much to learn in this space. That’s interesting of you to say, though, because basically everywhere I’ve looked, everyone is of the same opinion that Nvidia is a necessity… but then I pose the question: why and how is AMD still in the ML space and selling cards?

I looked up what time series analysis is… wow, actually super cool stuff that I could implement with that kind of data…

I’m definitely gonna check that stuff out. I want to at least be able to build some sort of working prototype… so I can bring it to people better than me and convince them to join my team.

1 Like

I’m into machine learning. I’ve watched a few courses on YouTube, but I’ve never really tried it out. I think DeepMind makes all their projects public, so you might be able to look at the way they set up their neural networks to make your own.

1 Like

If you want to learn machine learning, you should start by downloading CIFAR-10 or the MNIST handwritten digits database, and also download/check out a TensorFlow project that already processes that dataset. Examine the code, build your own CNN, and swap it in.
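For a sense of scale, a small CNN for CIFAR-10-shaped inputs (32x32 RGB, 10 classes) might look like this in Keras; the layer sizes here are arbitrary starting points, not tuned values:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A small CNN for CIFAR-10-shaped inputs. Sizes are illustrative, not tuned.
model = keras.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Sanity-check the forward pass on a random batch before touching real data.
probs = model(np.random.rand(4, 32, 32, 3).astype("float32"))
```

From there, `keras.datasets.cifar10.load_data()` fetches the actual dataset and `model.fit(...)` does the training.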

If you actually want to understand what you’re doing, then you should probably look to real academia, aka go to university for computer science.


They’re different branches/focuses of ML. When I first started playing in the field, it seemed generally accepted that there were 5 main branches or fields, but now the internet seems to argue for anywhere from 3 to 14.

If you’re looking at doing image processing, that’s mostly correct as far as I know.

Since you’re so new to the subject and want to get into image processing/recognition, I very strongly recommend Andrew Ng’s course. It will give you a decent understanding of what’s actually happening with neural nets.

1 Like

It’s been a while since I’ve looked into any GitHub repos for CIFAR-10; there are a ton!

What I would do is download the above project and run it. Understand the pieces and the flow. Then see if you can improve the results by modifying parameters or even building a novel CNN. That repo got to 79% accuracy, which for CIFAR-10 is not great, so you have a lot of room to improve.

Thankfully, you can use TensorFlow and other libraries with zero understanding of matrix multiplication (linear algebra) or chain-rule derivatives (calculus). But if you want to understand, which will always make you more proficient, you do need to learn the math.
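To make that concrete, here’s the dot-product-plus-chain-rule machinery for a single neuron, written out by hand with made-up numbers:

```python
# One neuron: y = w1*x1 + w2*x2 + b, squared-error loss. The forward pass
# is a dot product; the backward pass is just the chain rule from calculus.
w = [0.5, -0.25]
b = 0.5
x = [2.0, 4.0]
target = 1.0

y = w[0] * x[0] + w[1] * x[1] + b     # forward pass: 0.5
loss = (y - target) ** 2              # 0.25

# Chain rule: dL/dw_i = dL/dy * dy/dw_i = 2*(y - target) * x_i
dL_dy = 2 * (y - target)              # -1.0
grads = [dL_dy * x[0], dL_dy * x[1]]  # [-2.0, -4.0]
dL_db = dL_dy

# Gradient descent step: nudge each weight against its gradient.
lr = 0.1
w = [wi - lr * gi for wi, gi in zip(w, grads)]
```

Libraries like TensorFlow do exactly this, just vectorized over matrices and automated via autodiff.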

1 Like

Nvidia is 1000% not a necessity, but obviously it is if you want to use CUDA. You could argue that’s the easier path forward, and with that I’d agree.

Edited to add: if you need a GPU for machine learning, look for used server cards with no video output. They’re dirt cheap as Silicon Valley upgrades. You don’t need video out for a DL card.

1 Like

Agreed on all points above. Use a tutorial and Google Cloud with the Vision API. I created an Android app where you could take a picture and it would tell you what kind of fish it was. It was based on a simple tutorial like this one: Computer Vision System Tutorial with Google Cloud & Gravio

1 Like

I used to work in autonomous vehicle development. I wasn’t on the software side of things, but it sparked an interest. I see so much potential in a myriad of industries.

In automating manufacturing processes especially, there’s so much that it could help with. Auditing prints, identifying defects, the list is endless.

I’d love to see it used in dry sifting melt: make a program that takes a camera feed from a scope, paired with an automated dry-sifting machine. When the camera detects a certain purity of heads, it would trigger different actions from the machine… if that makes sense.
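The trigger logic side of that is conceptually simple. A sketch, where every name and threshold is invented and the hard part (the actual vision model behind `estimate_purity`) is stubbed out:

```python
# Hypothetical detect-then-trigger loop for a sifting rig. All names and
# thresholds are made up; a real version would run a vision model per frame.
PURITY_THRESHOLD = 0.9

def estimate_purity(frame):
    # Stub: a real implementation would classify heads in the scope image.
    return sum(frame) / len(frame)

def control_step(frame, actions):
    purity = estimate_purity(frame)
    # Divert material once the estimated head purity crosses the threshold.
    actions.append("collect" if purity >= PURITY_THRESHOLD else "keep_sifting")
    return purity

actions = []
for frame in ([0.95, 0.92, 0.97], [0.4, 0.5, 0.3]):
    control_step(frame, actions)
# actions is now ["collect", "keep_sifting"]
```

The machine-control half is the easy part; the real project is collecting labeled scope images and training a model good enough to estimate purity reliably.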


The hardest part of implementing high-speed AI is processing the inputs fast enough. You almost always need to design an FPGA, then take it to an ASIC, to accomplish this. Also, creating subsets of data and defining them is quite a task. Good luck!

2 Likes

For a quick glance at what’s being done across several fields: audio, business, etc.

Not affiliated with it myself, but I saw it on HN a few days ago.

Most of these aren’t done well, but maybe you’ll get some inspiration!

1 Like

Nah, the eventual goal would be to hire someone who knows far better than I do.

I was actually looking at just buying a new/used A6000/A100. My motherboard, AFAIK, doesn’t have any SXM connections on it; if my understanding is correct, most server cards use SXM sockets, whereas I need PCIe.

lol, from 5 to 3-14… I’m gonna have to look into what the arguments are for classifying so few vs. so many. Seems strange to me that there isn’t an agreed-upon standard, with subsets within those groups or something along those lines.

Kind of part of the idea of what I’m trying to do, but with waste, lol.

I just started watching a series of videos on Google Cloud the other day, actually.

I first started learning about ML in 2014 or thereabouts. It was much less of a popular field at that point, and the people who had strong opinions on things tended to have letters after their name and deep domain experience.

It’s not unreasonable to split things by the type of learning the system does: supervised, unsupervised, and reinforcement. And then hybrids thereof.

The 5 main application branches I seem to remember were computer vision/image processing, Natural Language Processing, time series analysis and predictive modeling, sentiment analysis, and speech recognition.

99% of my work has been done in the time series domain.

You can draw the buckets however you like. As in so many things, the trick likely lies in understanding that the buckets are somewhat arbitrary and represent subsets of a larger whole.


Ahh yes, I’ve heard these three types mentioned in a number of different videos, though I’ve yet to hear of more.

Especially since, in the coming years, I’m sure there will be a whole lot more buckets as people see/understand the possibilities of incorporating ML/DL.

I’m not familiar with SXM; I’m talking about PCIe cards.

Look into Tesla K40s or K80s, for example. The K40 is basically the original Titan but with no video out. These GPUs are available for dirt cheap relative to their original price. K80s have 24 GB of memory…!

Be mindful of the power connectors, though. The K80 takes an 8-pin CPU connector, for example, which is atypical.

1 Like

So, comparing the rendering rates: as far as I’ve understood, it’s a no-brainer to go get an A6000, or ideally an A100, since the H100 isn’t currently attainable. Any suggestions on where I might find a company offloading A100s?