MSI GeForce RTX 3060 Ventus 2X 12G OC - FreeBSD
Hardware info is on the main page.
Links
local links
History
2025-01-02: I created this page.
2024-11-08: pro4 - ollama - pull another model
tingo@locaal:~ $ ollama pull llama3.2:1b pulling manifest pulling 74701a8c35f6... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.3 GB pulling 966de95ca8a6... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.4 KB pulling fcc5a6bec9da... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 7.7 KB pulling a70ff7e570d9... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 6.0 KB pulling 4f659a1e86d7... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 485 B verifying sha256 digest writing manifest removing any unused layers success
Show what we have:
tingo@locaal:~ $ ollama list
NAME              ID              SIZE      MODIFIED
llama3.2:1b       baf6a787fdff    1.3 GB    30 seconds ago
mistral:latest    f974a74358d6    4.1 GB    17 minutes ago
Try to run it:
tingo@locaal:~ $ ollama run llama3.2:1b
Error: Post "http://127.0.0.1:11434/api/chat": EOF
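The EOF means the server closed the connection on the chat request. A quick way to check that the server itself is still up before retrying is to poke the API directly; both routes appear in the server's startup log further down this page, but the commands here are only a sketch, not something run as part of this entry:

$ curl -s http://127.0.0.1:11434/api/version
$ curl -s http://127.0.0.1:11434/api/tags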
Try again:
tingo@locaal:~ $ ollama run llama3.2:1b
>>> /set verbose
Set 'verbose' mode.
That worked. Do a query:
>>> why is the sky blue
The sky appears blue to us because of a phenomenon called Rayleigh scattering, named after the British physicist Lord Rayleigh. He discovered that when sunlight enters Earth's atmosphere, it encounters tiny molecules of gases such as nitrogen and oxygen. These molecules scatter the light in all directions, but they scatter shorter (blue) wavelengths more than longer (red) wavelengths. This is because the smaller molecules are more effective at scattering the blue light. As a result, the blue light is distributed throughout the atmosphere, giving the sky its blue color.

The amount of scattering that occurs depends on the altitude and density of the atmospheric gases. At sea level, where most people live, the sky appears blue due to Rayleigh scattering from the ground and higher altitudes. However, as you go higher into the atmosphere, the concentration of molecules decreases, and the shorter wavelengths (like blue and violet) are not scattered as much, which is why the sky appears more red or orange at higher altitudes.

total duration:       3.16908962s
load duration:        60.885314ms
prompt eval count:    30 token(s)
prompt eval duration: 316.333ms
prompt eval rate:     94.84 tokens/s
eval count:           198 token(s)
eval duration:        2.74121s
eval rate:            72.23 tokens/s
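For scripting, the same question can be sent straight to the HTTP API that the CLI talks to (the POST /api/generate route from the server's startup log further down); a minimal sketch, not run on this machine:

$ curl -s http://127.0.0.1:11434/api/generate -d '{"model": "llama3.2:1b", "prompt": "why is the sky blue", "stream": false}'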
2024-11-08: pro4 - ollama - in the second terminal, I run the mistral model
tingo@locaal:~ $ ollama run mistral pulling manifest pulling ff82381e2bea... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.1 GB pulling 43070e2d4e53... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 11 KB pulling 491dfa501e59... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 801 B pulling ed11eda7790d... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 30 B pulling 42347cd80dc8... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 485 B verifying sha256 digest writing manifest removing any unused layers success Error: Post "http://127.0.0.1:11434/api/chat": EOF
OK, it didn't like that. In the first terminal, a lot happened:
[GIN] 2024/11/08 - 21:31:57 | 200 | 34.56µs | 127.0.0.1 | HEAD "/" [GIN] 2024/11/08 - 21:31:57 | 404 | 171.688µs | 127.0.0.1 | POST "/api/show" time=2024-11-08T21:31:59.104+01:00 level=INFO source=download.go:175 msg="downloading ff82381e2bea in 42 100 MB part(s)" time=2024-11-08T21:32:42.117+01:00 level=INFO source=download.go:175 msg="downloading 43070e2d4e53 in 1 11 KB part(s)" time=2024-11-08T21:32:44.213+01:00 level=INFO source=download.go:175 msg="downloading 491dfa501e59 in 1 801 B part(s)" time=2024-11-08T21:32:46.419+01:00 level=INFO source=download.go:175 msg="downloading ed11eda7790d in 1 30 B part(s)" time=2024-11-08T21:32:48.441+01:00 level=INFO source=download.go:175 msg="downloading 42347cd80dc8 in 1 485 B part(s)" [GIN] 2024/11/08 - 21:32:52 | 200 | 54.830670798s | 127.0.0.1 | POST "/api/pull" [GIN] 2024/11/08 - 21:32:52 | 200 | 6.039215ms | 127.0.0.1 | POST "/api/show" time=2024-11-08T21:32:53.395+01:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x1482cc0 gpu_count=1 time=2024-11-08T21:32:53.400+01:00 level=DEBUG source=sched.go:219 msg="loading first model" model=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 time=2024-11-08T21:32:53.400+01:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[12.2 GiB]" time=2024-11-08T21:32:53.401+01:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 gpu=0 parallel=1 available=13142851584 required="4.8 GiB" time=2024-11-08T21:32:53.401+01:00 level=DEBUG source=gpu_bsd.go:111 msg=gpu_bsd.go::GetCPUMem::GetCPUMem total_memory=34242666496 free_memory=26264035328 free_swap=4069965824 time=2024-11-08T21:32:53.401+01:00 level=DEBUG source=server.go:101 msg="system memory" total="31.9 GiB" free="24.5 GiB" free_swap="3.8 GiB" time=2024-11-08T21:32:53.401+01:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[12.2 GiB]" time=2024-11-08T21:32:53.402+01:00 level=INFO source=memory.go:309 msg="offload to vulkan" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[12.2 GiB]" memory.required.full="4.8 GiB" memory.required.partial="4.8 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[4.8 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="105.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="185.0 MiB" time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu_avx/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu_avx2/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/vulkan/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" 
file=/tmp/ollama2644925853/runners/cpu_avx/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu_avx2/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/vulkan/ollama_llama_server time=2024-11-08T21:32:53.402+01:00 level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama2644925853/runners/vulkan/ollama_llama_server --model /home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --verbose --parallel 1 --port 36729" time=2024-11-08T21:32:53.402+01:00 level=DEBUG source=server.go:410 msg=subprocess environment="[PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/tingo/bin LD_LIBRARY_PATH=/tmp/ollama2644925853/runners/vulkan:/tmp/ollama2644925853/runners]" time=2024-11-08T21:32:53.405+01:00 level=INFO source=sched.go:445 msg="loaded runners" count=1 time=2024-11-08T21:32:53.405+01:00 level=INFO source=server.go:593 msg="waiting for llama runner to start responding" time=2024-11-08T21:32:53.405+01:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error" INFO [main] build info | build=0 commit="unknown" tid="0x827f86000" timestamp=1731097973 INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x827f86000" timestamp=1731097973 total_threads=12 INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="36729" tid="0x827f86000" timestamp=1731097973 llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = Mistral-7B-Instruct-v0.3 llama_model_loader: - kv 2: llama.block_count u32 = 32 llama_model_loader: - kv 3: llama.context_length u32 = 32768 llama_model_loader: - kv 4: llama.embedding_length u32 = 4096 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 6: llama.attention.head_count u32 = 32 llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 8: llama.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: general.file_type u32 = 2 llama_model_loader: - kv 11: llama.vocab_size u32 = 32768 llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 13: tokenizer.ggml.model str = llama llama_model_loader: - kv 14: tokenizer.ggml.pre str = default llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32768] = ["<unk>", "<s>", "</s>", "[INST]", "[... llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32768] = [0.000000, 0.000000, 0.000000, 0.0000... 
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32768] = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 23: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... llama_model_loader: - kv 24: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_0: 225 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens cache size = 771 llm_load_vocab: token to piece cache size = 0.1731 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32768 llm_load_print_meta: n_merges = 0 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 7.25 B llm_load_print_meta: model size = 3.83 GiB (4.54 BPW) llm_load_print_meta: general.name = Mistral-7B-Instruct-v0.3 llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 781 '<0x0A>' llm_load_print_meta: max token length = 48 time=2024-11-08T21:32:53.698+01:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model" ggml_vulkan: Found 1 Vulkan devices: Vulkan0: NVIDIA GeForce RTX 3060 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 llm_load_tensors: ggml ctx size = 0.27 MiB llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: CPU buffer size = 72.00 MiB llm_load_tensors: NVIDIA GeForce RTX 3060 buffer size = 3850.02 MiB time=2024-11-08T21:33:26.724+01:00 level=DEBUG source=server.go:638 msg="model load progress 0.02" time=2024-11-08T21:33:27.017+01:00 level=DEBUG source=server.go:638 msg="model load progress 0.11" 
time=2024-11-08T21:33:27.310+01:00 level=DEBUG source=server.go:638 msg="model load progress 0.24" time=2024-11-08T21:33:27.562+01:00 level=DEBUG source=server.go:638 msg="model load progress 0.44" time=2024-11-08T21:33:27.855+01:00 level=DEBUG source=server.go:638 msg="model load progress 0.60" time=2024-11-08T21:33:28.148+01:00 level=DEBUG source=server.go:638 msg="model load progress 0.80" time=2024-11-08T21:33:28.439+01:00 level=DEBUG source=server.go:638 msg="model load progress 0.99" llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: NVIDIA GeForce RTX 3060 KV buffer size = 256.00 MiB llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB llama_new_context_with_model: Vulkan_Host output buffer size = 0.14 MiB llama_new_context_with_model: NVIDIA GeForce RTX 3060 compute buffer size = 164.00 MiB llama_new_context_with_model: Vulkan_Host compute buffer size = 12.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 2 time=2024-11-08T21:33:28.732+01:00 level=DEBUG source=server.go:638 msg="model load progress 1.00" DEBUG [initialize] initializing slots | n_slots=1 tid="0x827f86000" timestamp=1731098008 DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="0x827f86000" timestamp=1731098008 INFO [main] model loaded | tid="0x827f86000" timestamp=1731098008 DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="0x827f86000" timestamp=1731098008 DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=0 tid="0x827f86000" timestamp=1731098008 time=2024-11-08T21:33:28.985+01:00 level=INFO source=server.go:632 msg="llama runner started in 35.58 seconds" time=2024-11-08T21:33:28.985+01:00 level=DEBUG source=sched.go:458 msg="finished setting up runner" model=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 [GIN] 2024/11/08 - 21:33:28 | 200 | 36.755545028s | 127.0.0.1 | POST "/api/chat" time=2024-11-08T21:33:28.985+01:00 level=DEBUG source=sched.go:462 msg="context for request finished" time=2024-11-08T21:33:28.985+01:00 level=DEBUG source=sched.go:334 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 duration=5m0s time=2024-11-08T21:33:28.985+01:00 level=DEBUG source=sched.go:352 msg="after processing request finished event" modelPath=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 refCount=0 time=2024-11-08T21:38:28.985+01:00 level=DEBUG source=sched.go:336 msg="timer expired, expiring to unload" modelPath=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 time=2024-11-08T21:38:28.985+01:00 level=DEBUG source=sched.go:355 msg="runner expired event received" modelPath=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 time=2024-11-08T21:38:28.985+01:00 level=DEBUG source=sched.go:371 msg="got lock to unload" modelPath=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 
time=2024-11-08T21:38:29.089+01:00 level=DEBUG source=server.go:1050 msg="stopping llama server" time=2024-11-08T21:38:29.089+01:00 level=DEBUG source=server.go:1056 msg="waiting for llama server to exit" time=2024-11-08T21:38:29.601+01:00 level=DEBUG source=server.go:1060 msg="llama server stopped" time=2024-11-08T21:38:29.601+01:00 level=DEBUG source=sched.go:376 msg="runner released" modelPath=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 time=2024-11-08T21:38:33.993+01:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.008431342 model=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 time=2024-11-08T21:38:33.994+01:00 level=DEBUG source=sched.go:380 msg="sending an unloaded event" modelPath=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 time=2024-11-08T21:38:33.994+01:00 level=DEBUG source=sched.go:303 msg="ignoring unload event with no pending requests" time=2024-11-08T21:38:34.273+01:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.287736371 model=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 time=2024-11-08T21:38:34.548+01:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.562665772 model=/home/tingo/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
2024-11-08: pro4 - ollama - in the first terminal, I start ollama server
tingo@locaal:~ $ file /usr/local/bin/ollama
/usr/local/bin/ollama: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 13.3, FreeBSD-style, Go BuildID=JLNZ20VZYvEn2LuhDl0-/VC-dnrH1y33jxyQLEOqg/Ghvn4ZvlqwJAVJFM5_jT/Avb_QBTK7vLqVySpN2Om, stripped
Start it:
tingo@locaal:~ $ OLLAMA_NUM_PARALLEL=1 OLLAMA_DEBUG=1 LLAMA_DEBUG=1 ollama start Couldn't find '/home/tingo/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKXfpK4SGCPzaOsQvyLoCk9I+sBTqveh20aWCsBvCgv 2024/11/08 21:31:24 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/tingo/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]" time=2024-11-08T21:31:24.933+01:00 level=INFO source=images.go:782 msg="total blobs: 0" time=2024-11-08T21:31:24.934+01:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0" [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers) [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers) [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers) [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers) [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers) [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers) [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers) [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers) [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers) [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers) [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers) [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers) [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers) [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers) [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers) [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers) [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (6 handlers) [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (6 handlers) [GIN-debug] GET / --> 
github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers) [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers) [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers) [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers) [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers) [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers) time=2024-11-08T21:31:24.934+01:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.0.0)" time=2024-11-08T21:31:24.934+01:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2644925853/runners time=2024-11-08T21:31:24.934+01:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/bsd/x86_64/cpu/bin/ollama_llama_server.gz time=2024-11-08T21:31:24.934+01:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/bsd/x86_64/cpu_avx/bin/ollama_llama_server.gz time=2024-11-08T21:31:24.934+01:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/bsd/x86_64/cpu_avx2/bin/ollama_llama_server.gz time=2024-11-08T21:31:24.934+01:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/ollama_llama_server.gz time=2024-11-08T21:31:24.934+01:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/vulkan-shaders-gen.gz time=2024-11-08T21:31:25.011+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu/ollama_llama_server time=2024-11-08T21:31:25.011+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu_avx/ollama_llama_server time=2024-11-08T21:31:25.011+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/cpu_avx2/ollama_llama_server time=2024-11-08T21:31:25.011+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2644925853/runners/vulkan/ollama_llama_server time=2024-11-08T21:31:25.011+01:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 vulkan]" time=2024-11-08T21:31:25.011+01:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY" time=2024-11-08T21:31:25.011+01:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler" time=2024-11-08T21:31:26.170+01:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=vulkan compute="" driver=0.0 name="" total="12.2 GiB" available="12.2 GiB"
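The server produces a lot of debug output and it scrolls by quickly. A small wrapper script could keep a copy on disk; this is only a sketch (the script name and log path are made up), using the same environment variables as above:

#!/bin/sh
# start-ollama.sh - hypothetical wrapper: start the ollama server with debug
# output enabled and keep a copy of the log in the home directory
OLLAMA_NUM_PARALLEL=1 OLLAMA_DEBUG=1 LLAMA_DEBUG=1 \
    ollama start 2>&1 | tee "$HOME/ollama-server.log"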
2024-11-08: pro4 - pkg - install ollama
Nov 8 21:26:41 locaal pkg[6019]: vulkan-loader-1.3.297 installed
Nov 8 21:26:42 locaal pkg[6019]: ollama-0.3.6_1 installed
Messages:
Message from ollama-0.3.6_1:
--
You installed ollama: the AI model runner.

To run ollama, please open 2 terminals.

1. In the first terminal, please run:
$ OLLAMA_NUM_PARALLEL=1 OLLAMA_DEBUG=1 LLAMA_DEBUG=1 ollama start

2. In the second terminal, please run:
$ ollama run mistral

This will download and run the AI model "mistral". You will be able to interact with it in plain English. Please see https://ollama.com/library for the list of all supported models. The command "ollama list" lists all models downloaded into your system.

When the model fails to load into your GPU, please use the provided ollama-limit-gpu-layers script to create model flavors with different num_gpu parameters.

ollama uses many gigabytes of disk space in your home directory, because advanced AI models are often very large. Please symlink ~/.ollama to a large disk if needed.
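The disk space warning is worth heeding: the two models listed further up this page already take about 5.4 GB under ~/.ollama. If the home directory sits on a small disk, the symlink trick from the message could look like this (the /bigdisk path is only an example):

$ mv ~/.ollama /bigdisk/ollama      # move the existing model store to the big disk
$ ln -s /bigdisk/ollama ~/.ollama   # leave a symlink where ollama expects it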
2024-11-08: pro4 - nvidia - nvidia-smi report
root@locaal:~ # nvidia-smi
Fri Nov 8 21:21:49 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120                Driver Version: 550.120        CUDA Version: N/A      |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3060        Off |   00000000:06:00.0 Off |                  N/A |
| 32%   28C    P0             33W / 170W  |      1MiB / 12288MiB   |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
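When ollama later offloads model layers to the card, it can be handy to watch VRAM usage from another terminal. Assuming the query options of nvidia-smi behave the same on this FreeBSD driver build as they do on Linux, something like this would do it:

# watch VRAM and GPU utilization every 2 seconds (query options assumed, not verified here)
nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu --format=csv -l 2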
2024-11-08: pro4 - Xorg - I manually loaded the kld module nvidia-drm from the console, then (as my own user) I did startx
It worked. From /var/log/Xorg.0.log:
[170183.798] (II) xfree86: Adding drm device (/dev/dri/card0) [170183.798] (II) Platform probe for /dev/dri/card0 [170183.798] (**) OutputClass "nvidia" ModulePath extended to "/usr/local/lib/nvidia/xorg,/usr/local/lib/xorg/modules,/usr/local/lib/xorg/modules" [170183.798] (**) OutputClass "nvidia" setting /dev/dri/card0 as PrimaryGPU [170183.798] (--) PCI:*(6@0:0:0) 10de:2504:1462:397d rev 161, Mem @ 0xfb000000/16777216, 0xd0000000/268435456, 0xe0000000/33554432, I/O @ 0x0000e000/128, BIOS @ 0x????????/65536 [170183.799] (II) Loading /usr/local/lib/xorg/modules/drivers/nvidia_drv.so [170183.799] (II) Module nvidia: vendor="NVIDIA Corporation" [170183.799] compiled for 1.6.99.901, module version = 1.0.0 [170183.799] Module class: X.Org Video Driver [170183.799] (II) NVIDIA dlloader X Driver 550.120 Fri Sep 13 09:34:46 UTC 2024 [170183.799] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs [170183.799] (--) Using syscons driver with X support (version 2.0) [170183.799] (--) using VT number 9 [170183.814] (WW) VGA arbiter: cannot open kernel arbiter, no multi-card support [170183.814] (II) NVIDIA(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32 [170183.814] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32 [170183.814] (==) NVIDIA(0): RGB weight 888 [170183.814] (==) NVIDIA(0): Default visual is TrueColor [170183.814] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [170183.814] (II) Applying OutputClass "nvidia" options to /dev/dri/card0 [170183.814] (**) NVIDIA(0): Enabling 2D acceleration [170183.814] (II) Loading sub module "glxserver_nvidia" [170183.814] (II) LoadModule: "glxserver_nvidia" [170183.814] (II) Loading /usr/local/lib/xorg/modules/extensions/libglxserver_nvidia.so [170183.820] (II) Module glxserver_nvidia: vendor="NVIDIA Corporation" [170183.820] compiled for 1.6.99.901, module version = 1.0.0 [170183.820] Module class: X.Org Server Extension [170183.820] (II) NVIDIA GLX Module 550.120 Fri Sep 13 09:34:34 UTC 2024 [170183.820] (II) NVIDIA: The X server supports PRIME Render Offload. 
[170186.288] (--) NVIDIA(0): Valid display device(s) on GPU-0 at PCI:6:0:0 [170186.288] (--) NVIDIA(0): DFP-0 (boot) [170186.288] (--) NVIDIA(0): DFP-1 [170186.288] (--) NVIDIA(0): DFP-2 [170186.288] (--) NVIDIA(0): DFP-3 [170186.288] (--) NVIDIA(0): DFP-4 [170186.288] (--) NVIDIA(0): DFP-5 [170186.288] (--) NVIDIA(0): DFP-6 [170186.289] (II) NVIDIA(0): NVIDIA GPU NVIDIA GeForce RTX 3060 (GA106-A) at PCI:6:0:0 [170186.289] (II) NVIDIA(0): (GPU-0) [170186.289] (--) NVIDIA(0): Memory: 12582912 kBytes [170186.289] (--) NVIDIA(0): VideoBIOS: 94.06.2f.00.9a [170186.289] (II) NVIDIA(0): Detected PCI Express Link width: 16X [170186.334] (--) NVIDIA(GPU-0): BenQ GL2450H (DFP-0): connected [170186.334] (--) NVIDIA(GPU-0): BenQ GL2450H (DFP-0): Internal TMDS [170186.334] (--) NVIDIA(GPU-0): BenQ GL2450H (DFP-0): 600.0 MHz maximum pixel clock [170186.334] (--) NVIDIA(GPU-0): [170186.334] (--) NVIDIA(GPU-0): DFP-1: disconnected [170186.334] (--) NVIDIA(GPU-0): DFP-1: Internal DisplayPort [170186.334] (--) NVIDIA(GPU-0): DFP-1: 2670.0 MHz maximum pixel clock [170186.334] (--) NVIDIA(GPU-0): [170186.335] (--) NVIDIA(GPU-0): DFP-2: disconnected [170186.335] (--) NVIDIA(GPU-0): DFP-2: Internal TMDS [170186.335] (--) NVIDIA(GPU-0): DFP-2: 165.0 MHz maximum pixel clock [170186.335] (--) NVIDIA(GPU-0): [170186.335] (--) NVIDIA(GPU-0): DFP-3: disconnected [170186.335] (--) NVIDIA(GPU-0): DFP-3: Internal DisplayPort [170186.335] (--) NVIDIA(GPU-0): DFP-3: 2670.0 MHz maximum pixel clock [170186.335] (--) NVIDIA(GPU-0): [170186.336] (--) NVIDIA(GPU-0): DFP-4: disconnected [170186.336] (--) NVIDIA(GPU-0): DFP-4: Internal TMDS [170186.336] (--) NVIDIA(GPU-0): DFP-4: 165.0 MHz maximum pixel clock [170186.336] (--) NVIDIA(GPU-0): [170186.336] (--) NVIDIA(GPU-0): DFP-5: disconnected [170186.336] (--) NVIDIA(GPU-0): DFP-5: Internal DisplayPort [170186.336] (--) NVIDIA(GPU-0): DFP-5: 2670.0 MHz maximum pixel clock [170186.336] (--) NVIDIA(GPU-0): [170186.337] (--) NVIDIA(GPU-0): DFP-6: disconnected [170186.337] (--) NVIDIA(GPU-0): DFP-6: Internal TMDS [170186.337] (--) NVIDIA(GPU-0): DFP-6: 165.0 MHz maximum pixel clock [170186.337] (--) NVIDIA(GPU-0): 170186.381] (==) NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select" [170186.381] (==) NVIDIA(0): will be used as the requested mode. [170186.381] (==) NVIDIA(0): [170186.383] (II) NVIDIA(0): Validated MetaModes: [170186.384] (II) NVIDIA(0): "DFP-0:nvidia-auto-select" [170186.384] (II) NVIDIA(0): Virtual screen size determined to be 1920 x 1080 [170186.424] (--) NVIDIA(0): DPI set to (92, 91); computed from "UseEdidDpi" X config [170186.424] (--) NVIDIA(0): option [170186.425] (II) NVIDIA: Reserving 24576.00 MB of virtual memory for indirect memory [170186.425] (II) NVIDIA: access. 
[170186.452] (II) NVIDIA(0): Setting mode "DFP-0:nvidia-auto-select" [170186.513] (==) NVIDIA(0): Disabling shared memory pixmaps [170186.513] (==) NVIDIA(0): Backing store enabled [170186.513] (==) NVIDIA(0): Silken mouse enabled [170186.513] (==) NVIDIA(0): DPMS enabled [170186.513] (WW) NVIDIA(0): Option "PrimaryGPU" is not used [170186.513] (II) NVIDIA(0): [DRI2] Setup complete [170186.513] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia [170186.513] (II) Initializing extension Generic Event Extension [170186.513] (II) Initializing extension SHAPE [170186.514] (II) Initializing extension MIT-SHM [170186.514] (II) Initializing extension XInputExtension [170186.514] (II) Initializing extension XTEST [170186.514] (II) Initializing extension BIG-REQUESTS [170186.514] (II) Initializing extension SYNC [170186.514] (II) Initializing extension XKEYBOARD [170186.515] (II) Initializing extension XC-MISC [170186.515] (II) Initializing extension SECURITY [170186.515] (II) Initializing extension XFIXES [170186.515] (II) Initializing extension RENDER [170186.515] (II) Initializing extension RANDR [170186.516] (II) Initializing extension COMPOSITE [170186.516] (II) Initializing extension DAMAGE [170186.516] (II) Initializing extension MIT-SCREEN-SAVER [170186.516] (II) Initializing extension DOUBLE-BUFFER [170186.516] (II) Initializing extension RECORD [170186.516] (II) Initializing extension DPMS [170186.517] (II) Initializing extension Present [170186.517] (II) Initializing extension DRI3 [170186.517] (II) Initializing extension X-Resource [170186.517] (II) Initializing extension XVideo [170186.517] (II) Initializing extension XVideo-MotionCompensation [170186.517] (II) Initializing extension GLX [170186.517] (II) Initializing extension GLX [170186.517] (II) Indirect GLX disabled. [170186.517] (II) GLX: Another vendor is already registered for screen 0 [170186.517] (II) Initializing extension XFree86-VidModeExtension [170186.518] (II) Initializing extension XFree86-DGA [170186.518] (II) Initializing extension XFree86-DRI [170186.518] (II) Initializing extension DRI2 [170186.518] (II) Initializing extension NV-GLX [170186.518] (II) Initializing extension NV-CONTROL [170186.518] (II) Initializing extension XINERAMA
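Loading nvidia-drm by hand works, but it will not survive a reboot. To make it permanent, the modeset sysctl has to be set before the module loads (the nvidia-drm-kmod package message below says the same); a sketch of the two settings, not applied as part of this entry:

# enable modesetting before the module loads (written into /boot/loader.conf)
sysrc -f /boot/loader.conf hw.nvidiadrm.modeset=1
# load nvidia-drm at boot (it should pull in nvidia and nvidia-modeset)
sysrc kld_list+="nvidia-drm"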
2024-11-08: pro4 - pkg - install nvidia-drm-kmod
Nov 8 19:34:30 locaal pkg[5820]: drm-510-kmod-5.10.163_9 installed
Nov 8 19:34:30 locaal pkg[5820]: nvidia-drm-510-kmod-550.120_1 installed
Nov 8 19:34:30 locaal pkg[5820]: nvidia-drm-kmod-550.120 installed
Messages:
The drm-510-kmod port can be enabled for amdgpu (for AMD GPUs starting with the HD7000 series / Tahiti) or i915kms (for Intel APUs starting with HD3000 / Sandy Bridge) through kld_list in /etc/rc.conf. radeonkms for older AMD GPUs can be loaded and there are some positive reports if EFI boot is NOT enabled (similar to amdgpu).

For amdgpu: kld_list="amdgpu"
For Intel: kld_list="i915kms"
For radeonkms: kld_list="radeonkms"

Please ensure that all users requiring graphics are members of the "video" group.

Modesetting must be enabled to use nvidia-drm.ko for graphics. This can be done by setting the modeset sysctl, the equivalent of the modeset kernel parameter on Linux.

hw.nvidiadrm.modeset=1

This must be set before loading nvidia-drm.ko, most easily done by placing the above in /boot/loader.conf.
2024-11-06: pro4 - pciconf -lv output
root@locaal:~ # pciconf -lv hostb0@pci0:0:0:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1450 subvendor=0x1022 subdevice=0x1450 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Root Complex' class = bridge subclass = HOST-PCI hostb1@pci0:0:1:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1452 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge' class = bridge subclass = HOST-PCI pcib1@pci0:0:1:3: class=0x060400 rev=0x00 hdr=0x01 vendor=0x1022 device=0x1453 subvendor=0x1022 subdevice=0x1453 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) PCIe GPP Bridge' class = bridge subclass = PCI-PCI hostb2@pci0:0:2:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1452 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge' class = bridge subclass = HOST-PCI hostb3@pci0:0:3:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1452 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge' class = bridge subclass = HOST-PCI pcib6@pci0:0:3:1: class=0x060400 rev=0x00 hdr=0x01 vendor=0x1022 device=0x1453 subvendor=0x1022 subdevice=0x1453 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) PCIe GPP Bridge' class = bridge subclass = PCI-PCI hostb4@pci0:0:4:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1452 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge' class = bridge subclass = HOST-PCI hostb5@pci0:0:7:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1452 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge' class = bridge subclass = HOST-PCI pcib7@pci0:0:7:1: class=0x060400 rev=0x00 hdr=0x01 vendor=0x1022 device=0x1454 subvendor=0x1022 subdevice=0x1454 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B' class = bridge subclass = PCI-PCI hostb6@pci0:0:8:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1452 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge' class = bridge subclass = HOST-PCI pcib8@pci0:0:8:1: class=0x060400 rev=0x00 hdr=0x01 vendor=0x1022 device=0x1454 subvendor=0x1022 subdevice=0x1454 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B' class = bridge subclass = PCI-PCI intsmb0@pci0:0:20:0: class=0x0c0500 rev=0x59 hdr=0x00 vendor=0x1022 device=0x790b subvendor=0x1849 subdevice=0xffff vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'FCH SMBus Controller' class = serial bus subclass = SMBus isab0@pci0:0:20:3: class=0x060100 rev=0x51 hdr=0x00 vendor=0x1022 device=0x790e subvendor=0x1849 subdevice=0xffff vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'FCH LPC Bridge' class = bridge subclass = PCI-ISA hostb7@pci0:0:24:0: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1460 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. 
[AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0' class = bridge subclass = HOST-PCI hostb8@pci0:0:24:1: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1461 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1' class = bridge subclass = HOST-PCI hostb9@pci0:0:24:2: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1462 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2' class = bridge subclass = HOST-PCI hostb10@pci0:0:24:3: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1463 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3' class = bridge subclass = HOST-PCI hostb11@pci0:0:24:4: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1464 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4' class = bridge subclass = HOST-PCI hostb12@pci0:0:24:5: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1465 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5' class = bridge subclass = HOST-PCI hostb13@pci0:0:24:6: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1466 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6' class = bridge subclass = HOST-PCI hostb14@pci0:0:24:7: class=0x060000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1467 subvendor=0x0000 subdevice=0x0000 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7' class = bridge subclass = HOST-PCI xhci0@pci0:1:0:0: class=0x0c0330 rev=0x01 hdr=0x00 vendor=0x1022 device=0x43d5 subvendor=0x1b21 subdevice=0x1142 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = '400 Series Chipset USB 3.1 xHCI Compliant Host Controller' class = serial bus subclass = USB ahci0@pci0:1:0:1: class=0x010601 rev=0x01 hdr=0x00 vendor=0x1022 device=0x43c8 subvendor=0x1b21 subdevice=0x1062 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = '400 Series Chipset SATA Controller' class = mass storage subclass = SATA pcib2@pci0:1:0:2: class=0x060400 rev=0x01 hdr=0x01 vendor=0x1022 device=0x43c6 subvendor=0x1b21 subdevice=0x0201 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = '400 Series Chipset PCIe Bridge' class = bridge subclass = PCI-PCI pcib3@pci0:2:0:0: class=0x060400 rev=0x01 hdr=0x01 vendor=0x1022 device=0x43c7 subvendor=0x1b21 subdevice=0x3306 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = '400 Series Chipset PCIe Port' class = bridge subclass = PCI-PCI pcib4@pci0:2:1:0: class=0x060400 rev=0x01 hdr=0x01 vendor=0x1022 device=0x43c7 subvendor=0x1b21 subdevice=0x3306 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = '400 Series Chipset PCIe Port' class = bridge subclass = PCI-PCI pcib5@pci0:2:4:0: class=0x060400 rev=0x01 hdr=0x01 vendor=0x1022 device=0x43c7 subvendor=0x1b21 subdevice=0x3306 vendor = 'Advanced Micro Devices, Inc. 
[AMD]' device = '400 Series Chipset PCIe Port' class = bridge subclass = PCI-PCI re0@pci0:4:0:0: class=0x020000 rev=0x15 hdr=0x00 vendor=0x10ec device=0x8168 subvendor=0x1849 subdevice=0x8168 vendor = 'Realtek Semiconductor Co., Ltd.' device = 'RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller' class = network subclass = ethernet vgapci0@pci0:6:0:0: class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x2504 subvendor=0x1462 subdevice=0x397d vendor = 'NVIDIA Corporation' device = 'GA106 [GeForce RTX 3060 Lite Hash Rate]' class = display subclass = VGA hdac0@pci0:6:0:1: class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x228e subvendor=0x1462 subdevice=0x397d vendor = 'NVIDIA Corporation' device = 'GA106 High Definition Audio Controller' class = multimedia subclass = HDA none0@pci0:7:0:0: class=0x130000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x145a subvendor=0x1022 subdevice=0x145a vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Zeppelin/Raven/Raven2 PCIe Dummy Function' class = non-essential instrumentation none1@pci0:7:0:2: class=0x108000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1456 subvendor=0x1022 subdevice=0x1456 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) Platform Security Processor (PSP) 3.0 Device' class = encrypt/decrypt xhci1@pci0:7:0:3: class=0x0c0330 rev=0x00 hdr=0x00 vendor=0x1022 device=0x145f subvendor=0x1849 subdevice=0xffff vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Zeppelin USB 3.0 xHCI Compliant Host Controller' class = serial bus subclass = USB none2@pci0:8:0:0: class=0x130000 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1455 subvendor=0x1022 subdevice=0x1455 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Zeppelin/Renoir PCIe Dummy Function' class = non-essential instrumentation ahci1@pci0:8:0:2: class=0x010601 rev=0x51 hdr=0x00 vendor=0x1022 device=0x7901 subvendor=0x1849 subdevice=0xffff vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'FCH SATA Controller [AHCI mode]' class = mass storage subclass = SATA hdac1@pci0:8:0:3: class=0x040300 rev=0x00 hdr=0x00 vendor=0x1022 device=0x1457 subvendor=0x1849 subdevice=0x4897 vendor = 'Advanced Micro Devices, Inc. [AMD]' device = 'Family 17h (Models 00h-0fh) HD Audio Controller' class = multimedia subclass = HDA
2024-11-06: pro4 - info from dmesg
CPU: AMD Ryzen 5 2600 Six-Core Processor (3400.20-MHz K8-class CPU)
  Origin="AuthenticAMD"  Id=0x800f82  Family=0x17  Model=0x8  Stepping=2
  Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x7ed8320b<SSE3,PCLMULQDQ,MON,SSSE3,FMA,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
  AMD Features=0x2e500800<SYSCALL,NX,MMX+,FFXSR,Page1GB,RDTSCP,LM>
  AMD Features2=0x35c233ff<LAHF,CMP,SVM,ExtAPIC,CR8,ABM,SSE4A,MAS,Prefetch,OSVW,SKINIT,WDT,TCE,Topology,PCXC,PNXC,DBE,PL2I,MWAITX>
  Structured Extended Features=0x209c01a9<FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA>
  XSAVE Features=0xf<XSAVEOPT,XSAVEC,XINUSE,XSAVES>
  AMD Extended Feature Extensions ID EBX=0x1007<CLZERO,IRPerf,XSaveErPtr,IBPB>
  SVM: NP,NRIP,VClean,AFlush,DAssist,NAsids=32768
  TSC: P-state invariant, performance statistics
real memory  = 68719476736 (65536 MB)
avail memory = 33282408448 (31740 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <ALASKA A M I >
FreeBSD/SMP: Multiprocessor System Detected: 12 CPUs
FreeBSD/SMP: 1 package(s) x 2 cache groups x 3 core(s) x 2 hardware threads
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
random: unblocking device.
ioapic0 <Version 2.1> irqs 0-23
ioapic1 <Version 2.1> irqs 24-55
Launching APs: 10 8 9 11 5 2 3 1 7 4 6
aesni0: <AES-CBC,AES-CCM,AES-GCM,AES-ICM,AES-XTS,SHA1,SHA256>
amdsmn0: <AMD Family 17h System Management Network> on hostb0
amdtemp0: <AMD CPU On-Die Thermal Sensors> on hostb0
ugen0.2: <Corsair Memory, Inc. RM-Series C-Link Adapter> at usbus0
nvidia0: <NVIDIA GeForce RTX 3060> on vgapci0
vgapci0: child nvidia0 requested pci_enable_io
vgapci0: child nvidia0 requested pci_enable_io
nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 550.120 Fri Sep 13 09:32:47 UTC 2024
uhid0: <Corsair Memory, Inc. RM-Series C-Link Adapter, class 0/0, rev 2.00/0.03, addr 1> on usbus0
2024-10-27: m3a ucc - at this point, I figured out that the SATA controller on this motherboard wasn't working properly, so I ordered a new motherboard.
2024-10-27: m3a ucc - Xorg - create a driver file for the nvidia card
root@locaal:~ # cat /usr/local/etc/X11/xorg.conf.d/20-nvidia.conf
Section "Device"
        Identifier "Card0"
        Driver "nvidia"
EndSection
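On a machine with more than one graphics device, or if autodetection picks the wrong card, a BusID line can pin the driver to a specific GPU. The address comes from pciconf; for the current board the pciconf output further up shows vgapci0@pci0:6:0:0, so a hypothetical variant of the file would be:

Section "Device"
        Identifier "Card0"
        Driver     "nvidia"
        BusID      "PCI:6:0:0"
EndSection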
Then I try startx with that config. Hmm, it doesn't work (I can still see the text cursor and the mouse pointer), and in /var/log/messages I see:
Oct 27 12:59:16 locaal kernel: NVRM: GPU at PCI:0000:01:00: GPU-70b64976-c6df-2ede-7ca1-a3d91a693571
Oct 27 12:59:16 locaal kernel: NVRM: Xid (PCI:0000:01:00): 79, pid='<unknown>', name=<unknown>, GPU has fallen off the bus.
Oct 27 12:59:16 locaal kernel: NVRM: GPU 0000:01:00.0: GPU has fallen off the bus.
Oct 27 12:59:16 locaal kernel: NVRM: GPU 0000:01:00.0: RmInitAdapter failed! (0x41:0x40:2762)
Oct 27 12:59:16 locaal kernel: nvidia0: NVRM: rm_init_adapter() failed!
Oct 27 12:59:16 locaal kernel: NVRM: GPU 0000:01:00.0: RmInitAdapter failed! (0x22:0x56:890)
Oct 27 12:59:16 locaal kernel: nvidia0: NVRM: rm_init_adapter() failed!
Hmm, did I make my user a member of the video group?
root@locaal:~ # groups tingo
tingo wheel operator
No. Fix it:
root@locaal:~ # pw groupmod video -m tingo
root@locaal:~ # groups tingo
tingo wheel operator video
2024-10-27: m3a ucc - nvidia-driver - test by kldload nvidia-modeset
I ran kldload nvidia-modeset on the console. The module loads, and this is from /var/log/messages:
Oct 27 12:29:46 locaal kernel: nvidia0: <NVIDIA GeForce RTX 3060> on vgapci0
Oct 27 12:29:46 locaal kernel: vgapci0: child nvidia0 requested pci_enable_io
Oct 27 12:29:46 locaal syslogd: last message repeated 1 times
Oct 27 12:29:46 locaal kernel: nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 550.120 Fri Sep 13 09:32:47 UTC 2024
So enable it at boot:
root@locaal:~ # sysrc kld_list+=nvidia-modeset
kld_list:  -> nvidia-modeset
2024-10-27: m3a ucc - pkg - install nvidia-driver
Oct 27 12:23:40 locaal pkg[1374]: libedit-3.1.20240808,1 installed
Oct 27 12:23:40 locaal pkg[1374]: libXfixes-6.0.1 installed
Oct 27 12:23:40 locaal pkg[1374]: libxml2-2.11.9 installed
Oct 27 12:23:40 locaal pkg[1374]: lua53-5.3.6_1 installed
Oct 27 12:23:40 locaal pkg[1374]: libepoll-shim-0.0.20240608 installed
Oct 27 12:23:40 locaal pkg[1374]: libpciaccess-0.18.1 installed
Oct 27 12:23:40 locaal pkg[1374]: libXrandr-1.5.4 installed
Oct 27 12:23:40 locaal pkg[1374]: libXxf86vm-1.1.5 installed
Oct 27 12:23:40 locaal pkg[1374]: libXdamage-1.1.6 installed
Oct 27 12:23:41 locaal pkg[1374]: libdrm-2.4.123,1 installed
Oct 27 12:23:41 locaal pkg[1374]: wayland-1.23.1 installed
Oct 27 12:23:41 locaal pkg[1374]: libxshmfence-1.3.2 installed
Oct 27 12:24:01 locaal pkg[1374]: llvm15-15.0.7_10 installed
Oct 27 12:24:01 locaal pkg[1374]: libglvnd-1.7.0 installed
Oct 27 12:24:02 locaal pkg[1374]: spirv-tools-2024.4.r1 installed
Oct 27 12:24:02 locaal pkg[1374]: libXv-1.0.12_1,1 installed
Oct 27 12:24:02 locaal pkg[1374]: spirv-llvm-translator-llvm15-15.0.5 installed
Oct 27 12:24:03 locaal pkg[1374]: mesa-libs-24.1.7 installed
Oct 27 12:24:03 locaal pkg[1374]: libfontenc-1.1.8 installed
Oct 27 12:24:03 locaal pkg[1374]: libxkbfile-1.1.3 installed
Oct 27 12:24:03 locaal pkg[1374]: libxcvt-0.1.2_2 installed
Oct 27 12:24:03 locaal pkg[1374]: libXfont2-2.0.6 installed
Oct 27 12:24:03 locaal pkg[1374]: xkbcomp-1.4.7 installed
Oct 27 12:24:03 locaal pkg[1374]: libepoxy-1.5.10 installed
Oct 27 12:24:03 locaal pkg[1374]: libudev-devd-0.6.0 installed
Oct 27 12:24:03 locaal pkg[1374]: libunwind-20240221 installed
Oct 27 12:24:03 locaal pkg[1374]: pixman-0.42.2 installed
Oct 27 12:24:03 locaal pkg[1374]: xkeyboard-config-2.41_4 installed
Oct 27 12:24:05 locaal pkg[1374]: mesa-dri-24.1.7 installed
Oct 27 12:24:05 locaal pkg[1374]: egl-wayland-1.1.13 installed
Oct 27 12:24:05 locaal pkg[1374]: xorg-server-21.1.13,1 installed
Oct 27 12:24:12 locaal pkg[1374]: nvidia-driver-550.120 installed
Message from nvidia-driver:
To use these drivers, make sure that you have loaded the NVidia kernel module, by running

# kldload nvidia-modeset

on the command line, or by putting ``nvidia-modeset'' on the ``kld_list'' variable in /etc/rc.conf, either manually or by running

# sysrc kld_list+=nvidia-modeset

If you build this port with FreeBSD AGP GART driver, make sure you have agp.ko kernel module installed and loaded, since nvidia.ko will depend on it, or have your kernel compiled with "device agp". Otherwise, the NVidia kernel module will not load. Also, please set correct value for ``Option "NvAGP"'' in ``Device'' section of your X11 configuration file.

When building with Linux compatibility support, make sure that linux.ko module is available as well (or have it compiled in kernel). It can be loaded via /boot/loader.conf, or later in the boot process if you add linux_enable="YES" to your /etc/rc.conf.

If X.org cannot start and reports

(EE) NVIDIA(0): Failed to obtain a shared memory identifier.

in /var/log/Xorg.0.log while actually you have ``options SYSVSHM'' enabled in kernel, the sysctl ``kern.ipc.shmall'' should be increased.

See /usr/local/share/doc/NVIDIA_GLX-1.0/README for more information.