ML and AI
QuickStart: JupyterLab with any directory
```bash
docker pull jupyter/scipy-notebook
# template - substitute your own path:
# docker run --rm -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes -v <YOUR_LOCAL_WORKING_DIRECTORY_PATH>:/home/jovyan/work jupyter/scipy-notebook
# or mount the current directory:
docker run --rm -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes -v $PWD:/home/jovyan/work jupyter/scipy-notebook
```
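On start, the container prints a URL with an access token - open it in a browser, and the mounted directory shows up as work/ in the JupyterLab file browser.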
Embeddings and Semantic Search
- RAG - Retrieval-Augmented Generation
- LLM
- Vectors
- Semantic Search
- Cosine Similarity
Apples

| Attribute | Score |
|---|---|
| Sweetness | 78/100 |
| Acidity | 40/100 |
| Juiciness | 10/100 |
| Body | 33/100 |
| Tannins | 77/100 |
Cosine similarity
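The attribute scores above can be read as a vector, and cosine similarity measures how closely two such vectors point in the same direction - the same trick semantic search applies to text embeddings. A minimal sketch; the second fruit's scores are made up purely for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means identical direction
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Attribute vector from the table: sweetness, acidity, juiciness, body, tannins
apple = np.array([78, 40, 10, 33, 77])
# Hypothetical comparison item, just for illustration
pear = np.array([70, 30, 55, 40, 15])

print(cosine_similarity(apple, pear))  # close to 1.0 = similar profiles
```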
Ollama on Mac
Option 1 - basic app (easy)
- download the app from https://ollama.com/
- follow the steps
Option 2 - build llama.cpp and use a custom model
- build https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#run-the-quantized-model
- download a prebuilt/quantized model into ./models in your build dir, e.g. https://huggingface.co/TheBloke/Llama-2-7B-GGUF
- run:
```bash
./main -ngl 32 -m ./models/llama-2-7b.Q5_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "how are you doing today"
```
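Roughly what those flags do: -ngl 32 offloads 32 layers to the GPU (Metal on Apple Silicon), -m picks the model file, -c 4096 sets the context size, --temp and --repeat_penalty shape sampling, -n -1 keeps generating until the context fills, and -p is the prompt.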
OpenCV on Jetson Nano
```python
import cv2
print("openCV version ====> ", cv2.__version__)  # testing openCV VER

dispW = 1280
dispH = 960
flip = 0
# GStreamer pipeline for the CSI camera: capture at 3264x2464@21fps, scale to dispW x dispH
camSet = 'nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
cam = cv2.VideoCapture(camSet)

while True:
    ret, frame = cam.read()
    cv2.imshow('nanoCam', frame)
    cv2.moveWindow('nanoCam', 0, 0)
    if cv2.waitKey(1) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
```
Video stream to web
https://maker.pro/nvidia-jetson/tutorial/streaming-real-time-video-from-rpi-camera-to-browser-on-jetson-nano-with-flask
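The tutorial's approach, roughly: OpenCV grabs frames and Flask serves them as an MJPEG (multipart/x-mixed-replace) stream the browser renders as live video. A minimal sketch under those assumptions - the camera source and port are illustrative (swap in the camSet pipeline string on the Nano):

```python
import cv2
from flask import Flask, Response

app = Flask(__name__)
cam = cv2.VideoCapture(0)  # on the Nano, pass the nvarguscamerasrc camSet string instead

def gen_frames():
    while True:
        ret, frame = cam.read()
        if not ret:
            break
        # JPEG-encode each frame and yield it as one part of the multipart stream
        ok, buf = cv2.imencode('.jpg', frame)
        if not ok:
            continue
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + buf.tobytes() + b'\r\n')

@app.route('/video')
def video():
    return Response(gen_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```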
Original backup with boxes, shapes, text
```python
import cv2
print("openCV version ====> ", cv2.__version__)  # testing openCV VER

# ==> RESOLUTION OPTIONS
# # =>OPT1 - XS
# dispW=320
# dispH=240
# # =>OPT2 - S
# dispW=480
# dispH=320
# =>OPT3 - M
dispW = 640
dispH = 480

# FLIP OUTPUT
flip = 0

# CAM SETTINGS - GStreamer pipeline for the CSI camera
camSet = 'nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'

# ==VIDEO CAPTURE MODES: -FILE or -CAMERA
videoPathRead = '/home/anton/Videos/nvidiaJetsonStream/jetsonStream2.avi'
# =FROM CAMERA
cam = cv2.VideoCapture(camSet)
# =FROM FILE
# cam = cv2.VideoCapture(videoPathRead)

# ====> WRITE VIDEO TO FILE SETTINGS
# =FILEPATH
# =WRITING - COMMENT THIS OUT OR CHANGE videoPathWrite WHEN READING FROM FILE - also see the write call in WINDOW 1
videoPathWrite = '/home/anton/Videos/nvidiaJetsonStream/jetsonStream.avi'
outVid = cv2.VideoWriter(videoPathWrite, cv2.VideoWriter_fourcc(*'XVID'), 21, (dispW, dispH))

while True:
    ret, frame = cam.read()
    # rectangle/circle: frame, corner, corner/radius, BGR colour, thickness (-1 fills the shape)
    frame = cv2.rectangle(frame, (340, 100), (400, 170), (255, 0, 0), 6)
    frame = cv2.circle(frame, (340, 240), 50, (0, 0, 255), -1)
    # Window 1 - DISPLAY AND WRITE
    cv2.imshow('nanoCam', frame)
    # window position
    cv2.moveWindow('nanoCam', 0, 0)
    # ===> WRITE VIDEO
    outVid.write(frame)
    # # Window 2
    # cv2.imshow('nanoCam2', frame)
    # cv2.moveWindow('nanoCam2', 705, 0)
    # # GREY MOD
    # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # # GREY - RESIZE
    # graySmall = cv2.resize(gray, (320, 240))
    # # Window 3
    # cv2.imshow('grayVideo', gray)
    # cv2.moveWindow('grayVideo', 0, 520)
    # # Window 4 - RESIZED - S
    # cv2.imshow('grayVideo2', graySmall)
    # cv2.moveWindow('grayVideo2', 705, 520)
    # exit check (waitKey(1) for minimal delay; ~50 ms would roughly match the 21 fps capture rate)
    if cv2.waitKey(1) == ord('q'):
        break

# CLEARING MEMORY
cam.release()
outVid.release()
cv2.destroyAllWindows()
```
Hardware
This book has an example of a microcontroller test (a demo of the book's first six chapters): https://tinymlbook.files.wordpress.com/2020/01/tflite_micro_preview.pdf
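Those early chapters build a tiny sine-predicting model and convert it for TFLite Micro. A minimal sketch of that train-and-convert step in Python, assuming TensorFlow is installed (layer sizes and epoch count are illustrative, not the book's exact values):

```python
import numpy as np
import tensorflow as tf

# Toy regression data in the spirit of the book's "hello world" sine example
x = np.random.uniform(0, 2 * np.pi, 1000).astype(np.float32).reshape(-1, 1)
y = np.sin(x)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=5, verbose=0)

# Convert to a .tflite flatbuffer, the format TFLite Micro runs on microcontrollers
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open('sine_model.tflite', 'wb').write(tflite_model)
```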