9906871f  2024-05-04 22:17:15 +0100  Merge pull request #714 from ksanman/infinite-context
0bbbf171 (refs/pull/714/head)  2024-05-02 23:30:16 -0600  Refactor executors
46a9d603  2024-05-02 23:29:33 -0600  Add method to get BOS token.
d4cc1223 (refs/pull/688/head)  2024-05-03 09:42:37 +0800  Merge branch 'master' of github.com:AsakusaRinne/LLamaSharp into auto_download
9d977b6e  2024-05-03 09:13:17 +0800  Update LLama/Native/Load/UnknownNativeLibrary.cs
61d143d8  2024-05-01 22:39:12 -0600  Implement context shifting in executor base
a86f14d1  2024-05-02 07:44:25 +0800  Add an API to get the loaded native library.
1d99204f (refs/pull/634/merge)  2024-05-01 10:30:20 +0700  Merge 96bf214427 into 6bf010d719
6bf010d7  2024-05-01 01:52:43 +0800  Merge pull request #689 from zsogitbe/master
54c01d4c (refs/pull/689/head)  2024-04-30 19:28:31 +0200  Making old code obsolete - SemanticKernel: Correcting working with PromptExecutionSettings
0c770a52  2024-05-01 01:02:25 +0800  Merge pull request #671 from kidkych/feature/interactive-sk-chatcompletion
16141adc  2024-05-01 01:00:02 +0800  Merge pull request #711 from Norne9/master
f55222bc  2024-04-30 16:10:08 +0100  Fixed `bool` marshalling
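The `bool` marshalling fix above refers to a classic .NET interop pitfall: by default the runtime marshals a C# `bool` as a 4-byte Win32 `BOOL`, while C/C++ `bool` is a single byte, so a struct or P/Invoke signature containing an unannotated `bool` silently corrupts adjacent fields. A minimal sketch of the standard remedy (the function name here is hypothetical, not an actual LLamaSharp entry point):

```csharp
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // `UnmanagedType.U1` forces a one-byte marshal, matching C/C++ `bool`.
    // Without it, .NET would pass/read a 4-byte BOOL and misalign the call.
    [DllImport("llama", EntryPoint = "some_native_function")] // hypothetical export
    [return: MarshalAs(UnmanagedType.U1)]
    public static extern bool SomeNativeFunction([MarshalAs(UnmanagedType.U1)] bool flag);
}
```

The same attribute is applied to `bool` fields inside structs that mirror native layouts.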
2d09c86f  2024-04-30 15:29:55 +0100  Fixed spelling
9a069bd7  2024-04-30 15:14:44 +0100  Updated submodule
25af52ac  2024-04-30 15:13:40 +0100  Implemented string KV overrides
7b03e735  2024-04-30 12:03:42 +0800  Merge pull request #709 from AsakusaRinne/format_check_ci
8746aa52  2024-04-30 02:29:41 +0100  Updated binaries. - llama.cpp: b8c1476e44 - Build action: https://github.com/SciSharp/LLamaSharp/actions/runs/8886754252/job/24400788991
5c60e6d4 (refs/pull/711/head)  2024-04-30 02:39:07 +0300  Merge pull request #1 from Norne9/Norne9-patch-chat-session
ad9bf1cb  2024-04-30 02:32:14 +0300  InitializeSessionFromHistoryAsync changed
33d5677c (refs/pull/709/head)  2024-04-30 00:00:35 +0800  Add editorconfig file for code format.
f44c8846  2024-04-29 23:31:52 +0800  Merge pull request #710 from AsakusaRinne/typo_check_ci
495177fd (refs/pull/710/head)  2024-04-29 18:19:20 +0800  fix: typos.
de31a06a  2024-04-29 18:07:13 +0800  ci: add workflow to check the spelling.
98909dc2  2024-04-29 10:36:19 +0800  Merge pull request #708 from AsakusaRinne/llama3_support
4c078a75  2024-04-28 22:38:21 +0100  Merge pull request #703 from martindevans/llava_async_load
175b25d4 (refs/pull/708/head)  2024-04-29 04:12:19 +0800  Add LLaMA3 chat session example.
377ebf36 (refs/pull/703/head)  2024-04-27 23:31:07 +0100  - Added `LoadFromFileAsync` method for `LLavaWeights` - Fixed checking for invalid handles in `clip_model_load`
47fcf950 (refs/pull/692/merge)  2024-04-27 23:02:03 +0300  Merge f051fedf45 into 84bb5a36ab
84bb5a36  2024-04-27 16:06:40 +0100  Merge pull request #702 from martindevans/interruptible_async_model_load
1ec0fee5 (refs/pull/702/head)  2024-04-27 15:04:54 +0100  Added optional `IProgress` parameter to `LoadFromFileAsync`
763c0972 (refs/pull/669/merge)  2024-04-27 11:00:24 +0200  Merge 8672de429d into b47ed9258f
2aa96b20  2024-04-27 09:39:40 +0200  Adding Response Format - Correcting non-standard way of working with PromptExecutionSettings
9867b4c8  2024-04-27 02:55:35 +0100  Only setting callback if the token can be cancelled.
00df7c15  2024-04-27 02:52:41 +0100  - Added `LLamaWeights.LoadFromFileAsync`. - Async loading supports cancellation through a `CancellationToken`. If loading is cancelled an `OperationCanceledException` is thrown. If it fails for another reason a `LoadWeightsFailedException` is thrown. - Updated examples to use `LoadFromFileAsync`
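Commits 00df7c15 and 1ec0fee5 describe the cancellation and progress-reporting semantics of `LLamaWeights.LoadFromFileAsync`. A sketch of how a caller might use it, based only on the behaviour stated in those commit messages (the model path and exact parameter order are assumptions, not verified against the shipped API):

```csharp
using System;
using System.Threading;
using LLama;
using LLama.Common;

var parameters = new ModelParams("model.gguf"); // hypothetical model path

// Cancel the load if it takes longer than 30 seconds.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
var progress = new Progress<float>(p => Console.WriteLine($"Loading: {p:P0}"));

try
{
    // Per the commit message: cancellation throws OperationCanceledException,
    // any other load failure throws LoadWeightsFailedException.
    using var weights = await LLamaWeights.LoadFromFileAsync(parameters, cts.Token, progress);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Model load was cancelled.");
}
```

Commit 9867b4c8 ("Only setting callback if the token can be cancelled") suggests the implementation skips the native abort callback entirely when `CancellationToken.None` is passed.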
b47ed925  2024-04-27 03:59:27 +0800  Merge pull request #701 from AsakusaRinne/add_issue_template
bcf3ef1e (refs/pull/701/head)  2024-04-27 03:58:45 +0800  Fix typo in issue templates.
c6565c3a  2024-04-27 03:56:58 +0800  Merge pull request #700 from AsakusaRinne/add_issue_template
d56eb1a5 (refs/pull/700/head)  2024-04-27 03:38:20 +0800  Add issue templates.
2c19b8b8  2024-04-27 03:13:24 +0800  rename the llava library name.
ee0d07a0  2024-04-27 02:23:57 +0800  Merge branch 'auto_download' of github.com:AsakusaRinne/LLamaSharp into auto_download
31ff3636  2024-04-27 02:23:36 +0800  fix: resolve comments.
18586cc4  2024-04-26 16:14:42 +0100  Merge pull request #696 from martindevans/safe_handle_constructor_refactor
e9fd7f96  2024-04-26 16:14:28 +0100  Merge pull request #691 from martindevans/empty_batch_check
a2f85738  2024-04-26 13:53:55 +0100  Merge pull request #698 from martindevans/slightly_safer_quantize_params
d4f793a7 (refs/pull/696/head)  2024-04-26 13:53:04 +0100  Using `is` check instead of `== null`
ecb359c9  2024-04-26 13:39:09 +0100  - Using more specific `LoadWeightsFailedException` when a llava model fails to load (#697)
f0e2a3dc  2024-04-26 09:18:52 +0800  Update LLama/Native/Load/NativeLibraryFromPath.cs
58ec798b (refs/pull/698/head)  2024-04-26 01:35:13 +0100  Modified `llama_model_quantize` to accept argument by `ref` instead of pointer.
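Commit 58ec798b swaps a raw-pointer P/Invoke parameter for `ref`, which lets the marshaller pin the managed struct for the duration of the call instead of requiring the caller to use `unsafe`/`fixed` blocks. A sketch of the shape of that change (the struct fields shown are illustrative placeholders, not the full native `llama_model_quantize_params` layout):

```csharp
using System.Runtime.InteropServices;

// Placeholder mirror of the native params struct; real fields differ.
[StructLayout(LayoutKind.Sequential)]
internal struct LLamaModelQuantizeParams
{
    public int nthread;
    public int ftype;
}

internal static class QuantizeNative
{
    // Before: `llama_model_quantize_params*` — callers needed unsafe code.
    // After: `ref` — the runtime pins the struct and passes its address.
    [DllImport("llama", CallingConvention = CallingConvention.Cdecl)]
    public static extern uint llama_model_quantize(
        [MarshalAs(UnmanagedType.LPStr)] string fname_inp,
        [MarshalAs(UnmanagedType.LPStr)] string fname_out,
        ref LLamaModelQuantizeParams param);
}
```

The observable native call is identical; only the managed signature becomes safer.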
f21c6784 (refs/pull/697/head)  2024-04-26 01:11:54 +0100  - Using more specific `LoadWeightsFailedException` when a llava model fails to load - Passing model path, instead of a message, to `LoadWeightsFailedException` constructor
54dab273  2024-04-26 01:03:26 +0100  - Removed unnecessary constructors from safe handles - Returning SafeLLamaGrammarHandle directly from `llama_grammar_init` and `llama_grammar_copy`
f051fedf (refs/pull/692/head)  2024-04-25 02:49:32 +0800  fix: package name.
e8958645  2024-04-25 02:43:18 +0800  feat: add experimental auto-download support.
6e28b21d  2024-04-25 02:39:25 +0800  refactor: remove the auto-download related parts.
25812762 (refs/pull/691/head)  2024-04-24 14:54:02 +0100  Added checks in `Decode` to skip doing anything if the batch is empty.
dc824d95 (refs/pull/640/merge)  2024-04-24 10:54:08 +0200  Merge 5a3dba5fc4 into ccc49eb1e0
59a0afdb  2024-04-24 08:24:02 +0200  Renaming files to correspond to class names
ab8dd0df  2024-04-24 08:06:40 +0200  Correcting non-standard way of working with PromptExecutionSettings
156d7bb4  2024-04-24 07:57:17 +0200  Revert "Standardizing Image Data implementation"
6bd269da  2024-04-24 07:57:15 +0200  Revert "Simplifying image handling"
8ea82bcc  2024-04-24 07:57:12 +0200  Revert "Embeddings correction"
5a4c0d46  2024-04-24 07:57:09 +0200  Revert "Automatic Solution Generator - Work in progress"
5a196ec6  2024-04-24 07:56:58 +0200  Reapply "Automatic Solution Generator - Work in progress"
ad2c81d9  2024-04-19 17:16:52 +0200  Revert "Automatic Solution Generator - Work in progress"
b1f3987f  2024-04-19 10:55:36 +0200  Automatic Solution Generator - Work in progress
3ded2dd7  2024-04-19 08:40:43 +0200  Embeddings correction
f2640246  2024-04-08 16:10:54 +0200  Simplifying image handling
b2423fe6  2024-04-07 19:47:39 +0200  Standardizing Image Data implementation
e6b3b4a8  2024-04-24 00:20:15 +0800  feat: support auto-download for native libraries.
ccc49eb1  2024-04-23 15:46:56 +0100  BatchedExecutor Save/Load (#681)
05937de5 (refs/pull/671/head)  2024-04-22 11:20:45 -0400  Merge branch 'SciSharp:master' into feature/interactive-sk-chatcompletion
28805019 (refs/pull/681/head)  2024-04-22 01:40:21 +0100  Removed unnecessary spaces
617f7215  2024-04-22 01:39:45 +0100  Moved the new save/load methods out to an extension class specifically for the batched executor.
00fa7953  2024-04-22 00:35:00 +0100  Added ability to save/load a `Conversation` to an in-memory state, instead of to file.
c87addc0 (refs/pull/683/head)  2024-04-21 09:56:26 +0800  feat: add experimental refactorings.
ce76ff1a  2024-04-20 20:32:39 +0100  Added the ability to save and load individual conversations in a batched executor. - New example - Added `BatchedExecutor.Load(filepath)` method - Added `Conversation.Save(filepath)` method - Added new (currently internal) `SaveState`/`LoadState` methods in LLamaContext which can stash some extra binary data in the header
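Commit ce76ff1a (landed via PR #681, "BatchedExecutor Save/Load") names two public entry points: `Conversation.Save(filepath)` and `BatchedExecutor.Load(filepath)`. A hypothetical usage sketch following only the commit description; the surrounding setup (`executor`, `conversation`, file paths) is assumed and the exact signatures may differ in the shipped API:

```csharp
// Persist a single conversation's state out of a running batch.
conversation.Save("conversation.state"); // hypothetical path

// Later, restore an executor (and its underlying context state) from disk.
var executor = BatchedExecutor.Load("executor.state"); // hypothetical path
```

Commit 00fa7953 additionally allows round-tripping a `Conversation` through an in-memory state object rather than a file, which the later commit 617f7215 moves into a batched-executor-specific extension class.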
f01c13ee  2024-04-20 17:23:55 +0300  Made special tokens included in prompts tokenize as intended (#677)
528c7ae2 (refs/pull/677/head)  2024-04-19 20:59:15 +0300  Merge remote-tracking branch 'upstream/master' into fix-tokenization-issues
ebc7c24e  2024-04-19 19:41:28 +0200  Merge pull request #678 from SignalRT/master
b416966e (refs/pull/678/head)  2024-04-19 19:38:58 +0200  Disable Metal on CI UnitTest
550f2f76  2024-04-19 18:31:14 +0100  Fixed build due to changes in unit tests
c6489f96  2024-04-19 20:20:42 +0300  Made special tokens included in prompts tokenize as intended
8f358e12  2024-04-19 19:17:23 +0200  Merge pull request #672 from SignalRT/master
df9a549e (refs/pull/672/head)  2024-04-19 19:17:09 +0200  Merge branch 'master' into master
3c764409  2024-04-19 16:30:32 +0100  - Added tests for generating embeddings with generative model and embedding model - Rewritten native API methods for embeddings to return pointers - null is a valid value for these methods to return so `Span` is not appropriate
89217f73  2024-04-19 17:23:44 +0200  Embeddings correction (#674)
43786b0d (refs/pull/676/head)  2024-04-19 17:19:13 +0200  Revert "Embeddings correction"
144837db (refs/pull/674/head)  2024-04-19 17:16:52 +0200  Revert "Automatic Solution Generator - Work in progress"
9c91fac2  2024-04-19 10:55:36 +0200  Automatic Solution Generator - Work in progress
5d442fa2  2024-04-19 08:40:43 +0200  Embeddings correction
49f437f3  2024-04-19 06:52:58 +0200  Typo on comment. Disable Metal on MacOS / OSX
53ae9048  2024-04-18 22:03:47 +0200  Set GPULayerCount to execute the Test
e6b71411  2024-04-18 21:29:37 +0200  Change attribute to a Filter to disable test on CI
b5b3bf55  2024-04-18 09:46:25 +0200  Merge branch 'master' of https://github.com/zsogitbe/LLamaSharp
c2f0d9c2  2024-04-08 16:10:54 +0200  Simplifying image handling
51ba8aa6  2024-04-08 10:06:04 +0200  Download image implementation
7378e990  2024-04-07 19:47:39 +0200  Standardizing Image Data implementation
42934351  2024-04-06 17:22:29 +0200  Extension LLava with in memory images
75cad1f3  2024-04-18 06:36:09 +0200  Remove .NET7 on test
89fbbc0f  2024-04-18 06:13:10 +0200  Restore previous dotnet-versions