Explore projects
LRK / MobileTransformers - An On-Device LLM PEFT Framework for Fine-Tuning and Inference
Creative Commons Attribution Non Commercial 4.0 International
MobileTransformers is a lightweight, modular framework based on ONNX Runtime for running and adapting large language models (LLMs) directly on mobile and edge devices. It supports on-device fine-tuning (PEFT), efficient inference, quantization, weight merging, and direct inference from merged models. It includes advanced generation techniques such as Retrieval-Augmented Generation (RAG) with vector databases and KV-cache with embedding reuse. The framework also provides export scripts for converting custom Huggingface SLMs/LLMs for on-device deployment with custom PEFT methods.
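One common way to realise the weight merging mentioned above is the LoRA recipe of folding a low-rank adapter into the frozen base weights, so the exported model needs no adapter layers at inference time. The sketch below is a minimal illustration of that general technique under assumed names and shapes, not the MobileTransformers API.

```python
# Illustrative sketch of LoRA-style adapter merging (a common PEFT technique,
# not the MobileTransformers API): fold a low-rank adapter into the frozen base
# weight so the merged model can be exported and run without adapter layers.
import numpy as np

def merge_lora(base_weight: np.ndarray,
               lora_a: np.ndarray,       # shape (r, in_features); names are assumptions
               lora_b: np.ndarray,       # shape (out_features, r)
               alpha: float = 16.0) -> np.ndarray:
    """Return W' = W + (alpha / r) * B @ A, the standard LoRA merge."""
    r = lora_a.shape[0]
    return base_weight + (alpha / r) * (lora_b @ lora_a)

# Hypothetical usage on one projection matrix of a fine-tuned layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((768, 768)).astype(np.float32)
a = (0.01 * rng.standard_normal((8, 768))).astype(np.float32)
b = np.zeros((768, 8), dtype=np.float32)   # B is zero-initialised in LoRA
w_merged = merge_lora(w, a, b)
assert w_merged.shape == w.shape
```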
LRK / Self-Adaptive Approximate Deep Learning through Slimming and Quantisation
Creative Commons Attribution Non Commercial 4.0 International
A comparison of a Slimmable Neural Network and an Any-precision (quantized) neural network on a human activity recognition use case on mobile devices. A full implementation of an SNN running on an IoT device is provided as well.
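As a rough illustration of the slimmable idea compared in this project, a layer can keep a single full-width weight tensor and expose several run-time widths by slicing it. The sketch below shows that general technique; the class name and width fractions are assumptions, not the repository's code.

```python
# Minimal sketch of a slimmable linear layer: one shared weight tensor,
# several run-time widths selected by slicing the output channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 width_mults=(0.25, 0.5, 1.0)):
        super().__init__()
        self.full = nn.Linear(in_features, out_features)
        self.width_mults = width_mults
        self.active_width = 1.0   # fraction of output channels currently used

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = int(self.full.out_features * self.active_width)
        # Use only the first `out` rows of the shared weight matrix.
        return F.linear(x, self.full.weight[:out], self.full.bias[:out])

# Hypothetical usage: trade accuracy for latency on the device by switching width.
layer = SlimmableLinear(64, 128)
layer.active_width = 0.5
y = layer(torch.randn(4, 64))   # -> shape (4, 64)
```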
LRK / Wi-Mind Wireless Ranging for Cognitive Load Inference
BSD Zero Clause License
Wireless ranging with participants engaged in different cognitive tasks.
Study of mobile UI adaptation for news viewing.
Reactions to personality-tailored mobile notifications.
IoT-sensed data with users engaged in different tasks.
Mobile video watching behaviour while performing different activities.
Cognitive load-related physiological signals while playing a game.