LLama 2 RO 4-bit (super fast, 7 GB VRAM)
Try it live on Gradio (new page). How does it work? As easy as 1, 2, 3: load our 4-bit model from Hugging Face, using the standard Rollama2 tokenizer. …
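The "4-bit" format is what makes the model this small: two weights fit in each byte. A minimal packing/unpacking sketch in plain Python (purely illustrative, not the actual loader code):

```python
def pack_int4(values):
    """Pack signed 4-bit ints (-8..7), two per byte (low nibble first)."""
    assert len(values) % 2 == 0
    packed = bytearray()
    for lo, hi in zip(values[::2], values[1::2]):
        packed.append((lo & 0x0F) | ((hi & 0x0F) << 4))
    return bytes(packed)

def unpack_int4(packed):
    """Inverse of pack_int4: recover the signed 4-bit ints."""
    out = []
    for b in packed:
        for nibble in (b & 0x0F, b >> 4):
            out.append(nibble - 16 if nibble >= 8 else nibble)
    return out

# Round-trip check: six weights occupy three bytes instead of twelve (fp16)
weights = [-8, -1, 0, 3, 7, -4]
packed = pack_int4(weights)
assert len(packed) == 3
assert unpack_int4(packed) == weights
```

Real 4-bit loaders also store per-group scales next to the packed nibbles, but the 4× size reduction versus fp16 comes from exactly this packing.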
Our LLama 3 Quantized models
Download our quantized models from Hugging Face at https://huggingface.co/intelpen. We have models generated with GPTQ, at 4 and 8 bits, which were finetuned on wikitext. We also have Bits & …
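GPTQ itself corrects quantization error layer by layer, but the 4- and 8-bit grids the models land on are the basic scale/zero-point scheme. A round-to-nearest sketch of that grid (a simplification of GPTQ, for intuition only):

```python
def quantize_rtn(weights, bits):
    """Round-to-nearest quantization to a k-bit grid with a per-tensor
    scale and zero-point. GPTQ refines the rounding decisions; the grid
    it maps onto is the same."""
    qmax = (1 << bits) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0
    zero = round(-lo / scale)
    q = [min(qmax, max(0, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize(q, scale, zero):
    """Map grid indices back to approximate float weights."""
    return [(v - zero) * scale for v in q]

w = [-0.5, -0.1, 0.0, 0.2, 0.5]
q, s, z = quantize_rtn(w, bits=4)
w_hat = dequantize(q, s, z)
# Reconstruction error is bounded by half a quantization step
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

At 8 bits the grid has 256 levels instead of 16, which is why the 8-bit models trade memory for noticeably lower quantization error.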
Test our LLMs
This is a preview of our 8B LLM models finetuned on customer data. The models are quantized to 4-bit (reducing the memory need from 48 GB to ~6.5 GB) and the last …
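The size reduction follows from bytes per parameter: 4-bit weights take a quarter of the space of fp16 ones, and the quoted 48 GB and ~6.5 GB figures presumably include runtime overhead on top of the raw weights. A back-of-envelope estimate (the overhead term is illustrative, not measured):

```python
def model_memory_gb(n_params, bits_per_param, overhead_gb=0.0):
    """Rough weight-memory estimate: params * bits / 8 bytes, plus a fixed
    allowance for KV cache, activations, CUDA context, etc. (illustrative)."""
    return n_params * bits_per_param / 8 / 1e9 + overhead_gb

# An 8B-parameter model, weights only:
fp16 = model_memory_gb(8e9, 16)   # 16.0 GB
int4 = model_memory_gb(8e9, 4)    #  4.0 GB
assert fp16 / int4 == 4.0
```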
Data Science Projects
Optimization of the production of a large (>15k employees) steel producer in North-West Europe. Modelling of the whole production chain, from the reception of the raw materials in the port, transport …
Real-Time Object Detection and Tracking in Full HD videos
Deep-learning computer-vision projects applied to real-time football games. Football players are detected in live real-time video streams, identified, tracked across the field, and their actions and intentions detected. Statistics …
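Tracking players across frames boils down to matching each frame's detections to existing tracks, typically by bounding-box overlap (IoU). A minimal greedy matcher sketch under that assumption (the function names and threshold are illustrative; production trackers add motion models and appearance features):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, threshold=0.3):
    """Greedily match new detections to existing track IDs by IoU;
    unmatched detections start new tracks."""
    next_id = max(tracks, default=-1) + 1
    updated, free = {}, list(detections)
    for tid, box in tracks.items():
        best = max(free, key=lambda d: iou(box, d), default=None)
        if best is not None and iou(box, best) >= threshold:
            updated[tid] = best
            free.remove(best)
    for det in free:
        updated[next_id] = det
        next_id += 1
    return updated

# One tracked player, then a frame with the same player plus a newcomer:
tracks = {0: (10, 10, 50, 50)}
tracks = update_tracks(tracks, [(12, 11, 52, 49), (200, 200, 240, 240)])
assert set(tracks) == {0, 1}  # player 0 re-matched, one new ID created
```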
Introduction to Data Science
We offer a 3-day introductory training in Data Science. It consists of 2 days of classes with hands-on examples and 1 day to develop and deploy a real-life project. The agenda …
Lidar for Autonomous Driving
https://www.linkedin.com/feed/update/urn:li:activity:6910544715713011712