Locally I only have these two models, so I can only compare it to Llama3 (7b). I have an Obsidian AI chat plugin that uses Ollama. I use it to add context to my notes, reword certain sentences, or get suggestions on certain things.
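Under the hood, plugins like that usually just hit Ollama's local REST API. Here is a minimal sketch of what such a request looks like, assuming Ollama's default port (11434) and a pulled `llama3` model; the function names are my own, not the plugin's:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> bytes:
    # Ollama's /api/generate takes a JSON body with the model name and prompt;
    # "stream": False asks for one complete JSON response instead of a token stream.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server with the model pulled, e.g. `ollama pull llama3`
    print(ask("llama3", "Reword this sentence more formally: it kinda works."))
```

Swapping the model name for `deepseek-r1:14b` is all it takes to compare the two on the same prompt.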
DeepSeek does feel a bit “smarter” than Llama3. I say “feel” because I don’t really have an objective way to benchmark and measure both models.
Then again it might not be an “apples to apples” comparison. Llama3 is 7b and DeepSeek-r1 is 14b. I am just pleasantly surprised that I could run it comfortably on a MacBook M2 / 16GB RAM.
Yes, I know that what I’m using is a distilled version of the model, not the full model. Yet it still feels like it performs better than Llama.
I love reading the “thinking” part; it is like reading the inner monologue of an LLM 😅
I have yet to try the online version; their server is always busy for me. 🤷‍♂️
📰 News Alert:
🔖 Title: Czech National Bank governor to propose $7B Bitcoin reserve plan
🗓️ Published: Wed, 29 Jan 2025 07:50:06 +0000
📄 Summary: The Czech National Bank is considering investing up to 5% of its reserves in Bitcoin, which could mean acquiring $7.3 billion in BTC as part of its diversification strategy. This move could signal growing institutional interest in cryptocurrency as a hedge against traditional assets. #Bitcoin #CzechNationalBank
🔗 https://cointelegraph.com/news/czech-national-bank-governor-proposes-7b-bitcoin-reserve-plan?utm_source=rss_feed&utm_medium=rss&utm_campaign=rss_partner_inbound
🗞️ Source: Cointelegraph.com News