DeepSeek, a new AI model from China, is the new rival to the US AIs. It's a game changer: you can sign up today, it's open source, and it's free.

How in the hell did this thread pop up right when I needed it? I'm an IT infrastructure guy, so some of the stuff discussed here previously was foreign as hell to me. On my latest work project, I've worked with AI/ML specialists who have been able to break all this stuff down so I can understand it. You can host your own ChatGPT-style service using Ollama and download whatever large language models (LLMs) you want to run on it. That way, you don't have to pay a monthly subscription for some of those services. Hell, I'm trying to create my own baseball player hit prediction app for the upcoming baseball season using this :lol:
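For anyone curious, the basic setup is dead simple. Here's a minimal sketch of talking to a local Ollama server from Python, assuming Ollama is installed and listening on its default port (11434) and you've pulled a model first; the model name and prompt below are just placeholders:

    import requests

    # Minimal sketch: query a local Ollama server on its default port.
    # Assumes you've already pulled a model, e.g. `ollama pull llama3`
    # (the model tag below is a placeholder; swap in whatever you pulled).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # placeholder model tag
            "prompt": "Explain batting average in one sentence.",
            "stream": False,    # one JSON response instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the model's completion text

No subscription, no API bill; it's all running on your own box.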

WHAT???!!!
 
I may PM you, as I would like some help getting started on a project. I'm setting up a local AI server that will have enough GPU VRAM to run quite a few local LLMs, like a Qwen 3.5 Next GGUF.

Ok, so I just set up gpt-oss 20b on my server and wow!

This is really cool. I'm running it through my terminal and I'm using OpenWebUI for the GUI. It's like having ChatGPT on my home server without the restrictions.

I’m about to buy a new video card because now it’s really on!
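For anyone following along: OpenWebUI is just the front end. If the model is being served through Ollama (an assumption on my part), it also exposes an OpenAI-compatible endpoint you can script against, so your own tools can hit the local server the same way they'd hit ChatGPT. A rough sketch; the model tag and prompt are placeholders:

    from openai import OpenAI

    # Point the standard OpenAI client at the local Ollama server.
    # The api_key is required by the client but ignored by Ollama.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    reply = client.chat.completions.create(
        model="gpt-oss:20b",  # assumed tag; use whatever `ollama list` shows
        messages=[{"role": "user", "content": "Say hello from my home server."}],
    )
    print(reply.choices[0].message.content)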
 
I'm lucky I bought a previous-gen Threadripper a couple of years ago. I'm taking out my 3090 Ti and replacing it with a 9070 XT and a 9070 for 32 GB of VRAM at a cost of only 25 watts more. Local inference front ends can split the LLM across several different GPUs if necessary.
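I can't speak for every front end, but llama.cpp-based stacks (including the llama-cpp-python bindings) split a GGUF model's layers across cards with a tensor_split ratio, which is roughly what I mean. A sketch, assuming a llama-cpp-python build compiled with GPU offload support; the model path and split ratios are placeholders:

    from llama_cpp import Llama

    # Sketch: split one GGUF model across two GPUs with llama-cpp-python.
    llm = Llama(
        model_path="/models/example.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,          # offload all layers to GPU
        tensor_split=[0.5, 0.5],  # fraction of the model per card
    )
    out = llm("Say hi in five words.", max_tokens=16)
    print(out["choices"][0]["text"])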
 
How much are you paying for these graphics cards, fam? :lol:

I'm in experimentation mode, but this is also going to be on my weather data server, so I'm thinking about extended capabilities.
 
Let me holla at you in the DMs.
 