Jorge Alonso PRO

oieieio

AI & ML interests

AI/ML


Organizations

Hugging Face Discord Community · OIEIEIO · Lambda Go Labs

oieieio's activity

replied to wassemgtk's post about 1 hour ago

Second Run. Huge Gains.

Model

  • Model: Refined AdaptiveGESAL is a custom NN with dynamic graph-based clustering (up to 30 nodes), trainable SVD layers (SVFLinear), and temporal processing to capture engine degradation patterns over cycles.

Refined Results

Here’s how our refined model performed after training and evaluation on the test set:

  • Average RMSE (Estimated): 29.85 cycles
  • Mean Absolute Error (MAE): 36.99 cycles
  • Accuracy (±50 cycles): 100.0%
  • Nodes Created: 30

These results are a significant improvement over our first run (RMSE 85.03, MAE 73.95), demonstrating the effectiveness of the temporal layers and preprocessing. However, the model still requires tuning to be competitive with published baselines.
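
For reference, these metrics follow directly from arrays of true and predicted RUL values; a generic sketch, not the notebook's actual evaluation code:

```python
import numpy as np

def evaluate_rul(y_true, y_pred, tolerance=50):
    """Return RMSE, MAE, and percent of predictions within +/- tolerance cycles."""
    err = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    acc = 100.0 * np.mean(np.abs(err) <= tolerance)
    return rmse, mae, acc
```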

Comparison to Industry Standards

Industry-standard models on FD001 (e.g., CNN-LSTM, LSTM, or hybrid deep learning approaches) typically achieve:

  • RMSE: ~10–16 cycles

Our refined model’s RMSE (29.85 cycles) is a substantial improvement over the first run, though still roughly twice the industry benchmark.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# SVFLinear and Node are defined earlier in the notebook: SVFLinear is an
# SVD-factorized linear layer with a trainable z vector, and Node stores a
# cluster embedding together with its per-layer z vectors.

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class AdaptiveGESAL(nn.Module):
    def __init__(self, input_dim=5, hidden_dim=128, num_nodes=30,
                 distance_threshold=0.05, buffer_size=50):
        super().__init__()
        # Temporal layers: Conv1d for local sensor patterns, LSTM for degradation over cycles
        self.conv1d = nn.Conv1d(input_dim, hidden_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True, num_layers=1)

        # SVF layers
        self.svf1 = SVFLinear(hidden_dim, hidden_dim)
        self.svf2 = SVFLinear(hidden_dim, hidden_dim)
        self.output_layer = nn.Linear(hidden_dim, 1)  # RUL prediction
        self.activation = nn.ReLU()

        # Graph structure: start from one node with a zero embedding and the initial z vectors
        self.device = device
        initial_embedding = torch.zeros(input_dim, device=self.device)
        initial_z_vectors = [self.svf1.z.clone().detach(), self.svf2.z.clone().detach()]
        self.nodes = [Node(initial_embedding, initial_z_vectors)]
        self.distance_threshold = distance_threshold
        self.buffer = []
        self.buffer_size = buffer_size
        self.hidden_dim = hidden_dim
        self.num_nodes = num_nodes

    def compute_distance(self, emb1, emb2):
        # The original post was truncated at this line; cosine distance is one
        # plausible choice, consistent with the small values logged elsewhere (e.g., 0.00985).
        return 1 - F.cosine_similarity(emb1.unsqueeze(0), emb2.unsqueeze(0)).item()
```
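
The snippet above depends on SVFLinear, which the post doesn't show. Based on the SVF formulation from the GESAL announcement, \( W' = U (\Sigma \cdot z) V^T \), a minimal sketch could look like this; the repository's actual implementation may differ:

```python
import torch
import torch.nn as nn

class SVFLinear(nn.Module):
    """Linear layer with frozen SVD factors and a trainable singular-value
    scale z, i.e. W' = U (S * z) V^T. Sketch only; see the GESAL repo."""
    def __init__(self, in_features, out_features):
        super().__init__()
        U, S, Vh = torch.linalg.svd(torch.randn(out_features, in_features),
                                    full_matrices=False)
        self.register_buffer("U", U)    # frozen left singular vectors
        self.register_buffer("S", S)    # frozen singular values
        self.register_buffer("Vh", Vh)  # frozen right singular vectors
        self.z = nn.Parameter(torch.ones_like(S))  # the only trainable part

    def forward(self, x):
        W = self.U @ torch.diag(self.S * self.z) @ self.Vh
        return x @ W.T
```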
replied to wassemgtk's post about 5 hours ago

RUL Prediction on NASA C-MAPSS FD001 with AdaptiveGESAL

Overview

This project explores Remaining Useful Life (RUL) prediction for turbofan engines using the NASA C-MAPSS FD001 dataset, implementing an innovative neural network model called AdaptiveGESAL. Unlike my previous work with language models, this setup focuses solely on a neural network (NN) with graph-based adaptation and SVD-based linear layers (no LLM), marking an awesome first attempt at this task!

Model and Dataset

  • Model: AdaptiveGESAL is a custom NN with dynamic graph-based clustering (up to 50 nodes) and trainable SVD layers (SVFLinear) to adapt to engine degradation patterns.
  • Dataset: NASA C-MAPSS FD001, a benchmark dataset with 100 engines, ~20,631 samples, and 21 sensor features, used to predict RUL in cycles.
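
For context, train_FD001.txt is whitespace-separated with 26 columns: unit id, cycle, three operational settings, and 21 sensor readings. RUL labels are conventionally derived per engine as the last recorded cycle minus the current cycle. A minimal loading sketch along those lines (not the project's actual preprocessing code):

```python
import pandas as pd

cols = (["unit", "cycle"]
        + [f"op_setting_{i}" for i in range(1, 4)]
        + [f"sensor_{i}" for i in range(1, 22)])
df = pd.read_csv("train_FD001.txt", sep=r"\s+", header=None, names=cols)

# Conventional RUL labeling: cycles remaining until each engine's final cycle.
df["RUL"] = df.groupby("unit")["cycle"].transform("max") - df["cycle"]
```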

First Run Results

Here’s how our model performed in its initial run:

  • Average RMSE: 85.03 cycles
  • Mean Absolute Error (MAE): 73.95 cycles
  • Accuracy (±50 cycles): 67.91%

These results are OK for a first attempt, demonstrating that the model learned to cluster engines and predict RUL, but there’s significant room for improvement.

Comparison to Industry Standards

Industry-standard models on FD001 (e.g., CNN-LSTM, LSTM, or hybrid deep learning approaches) typically achieve:

  • RMSE: ~10–16 cycles

Our model’s RMSE (85.03 cycles) highlights a gap, but this first run is a promising starting point. With tuning, we aim to close this gap and achieve competitive performance.

Visualization

Here’s a visual summary of our progress, comparing AdaptiveGESAL’s performance to industry standards:

AdaptiveGESAL Performance vs. Industry Standard (FD001)

The visualization shows:

  • Stable but high RMSE/error over epochs (cyan lines).
  • Node growth capping at 50 (left plot).
  • A significant gap to industry standards (red dashed line, 10–16 cycles RMSE), with annotations highlighting the need for tuning.

Next Steps

This is an encouraging first attempt, but the model needs tuning to become competitive. Planned improvements include:

  • Increasing training epochs (e.g., 20–50).
  • Tuning hyperparameters (e.g., distance_threshold, learning rates).
  • Evaluating on the test set for true accuracy.
  • Pruning or reducing the node count to prevent over-segmentation (a merge sketch follows this list).
  • Adding regularization or feature engineering for better generalization.
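
On the node-pruning idea, one simple approach is to greedily merge the closest pair of node embeddings until a target count remains. A hypothetical sketch against the Node structure from the earlier snippet; merging by averaging is my assumption, not the repo's method:

```python
def prune_nodes(nodes, target_count, distance_fn):
    """Greedily merge the two closest nodes until target_count remain.
    Hypothetical: assumes each node exposes .embedding and .z_vectors."""
    while len(nodes) > target_count:
        best = None  # (distance, i, j) of the closest pair so far
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                d = distance_fn(nodes[i].embedding, nodes[j].embedding)
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # Merge node j into node i by averaging the embedding and z vectors.
        nodes[i].embedding = (nodes[i].embedding + nodes[j].embedding) / 2
        nodes[i].z_vectors = [(a + b) / 2
                              for a, b in zip(nodes[i].z_vectors, nodes[j].z_vectors)]
        del nodes[j]
    return nodes
```

Usage would be something like `prune_nodes(model.nodes, 15, model.compute_distance)`.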

Stay tuned for updates as we refine AdaptiveGESAL to match or exceed industry standards!

Repository

Check out the full code and notebook on GitHub: AI-Adaptive-Learning-GESAL


replied to wassemgtk's post about 9 hours ago

Awesome. Now I'm trying GESAL on NASA dataset.
(Screenshots: GESAL training runs on the NASA dataset, 2025-02-28.)

reacted to burtenshaw's post with 🤗 2 days ago
Now the Hugging Face agent course is getting real! With frameworks like smolagents, LlamaIndex, and LangChain.

🔗 Follow the org for updates https://huggingface.co./agents-course

This week we are releasing the first framework unit in the course and it’s on smolagents. This is what the unit covers:

- why should you use smolagents vs another library?
- how to build agents that use code
- build multi-agent systems
- use vision language models for browser use

The team has been working flat out on this for a few weeks. Led by @sergiopaniego and supported by smolagents author @m-ric.
replied to wassemgtk's post 2 days ago

GESAL Training Update

Working on some tweaks but here are some earlier training logs.

I’ve captured the latest GESAL training logs for my engine/turbine dataset, showing real-time adaptation in action. Using meta-llama/Llama-3.2-1B on Ubuntu 24.04 (4060Ti GPU, 16GB VRAM; Xeon, 32 cores; 256GB RAM) with InfluxDB 3 Core, the logs detail:

  • Embedding generation for 1,326 synthetic engines
  • Node updates (currently 2 nodes, target: 10-15)
  • Distance calculations (e.g., 0.00985)
  • InfluxDB writes for parameters like flight_hours, exhaust_temp, and wear
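
For anyone reproducing the storage side, here is a minimal write sketch assuming the influxdb3-python client; the host, token, database, and measurement schema are placeholders, not my actual configuration:

```python
# pip install influxdb3-python
from influxdb_client_3 import InfluxDBClient3, Point

client = InfluxDBClient3(host="http://localhost:8181",
                         token="my-token", database="engines")

# Hypothetical measurement mirroring the logged parameters.
point = (Point("engine_telemetry")
         .tag("engine_id", "1")
         .field("flight_hours", 5000.0)
         .field("exhaust_temp", 750.0)
         .field("wear", 3.2))
client.write(record=point)
```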

Note: The attached image shows logs from my GESAL-Training.ipynb notebook, running on February 26, 2025.

replied to wassemgtk's post 2 days ago

@wassemgtk — Thank you for your brilliant work on Graph-Enhanced Singular Adaptive Learning (GESAL) and for releasing the code and white paper on GitHub. Your framework’s innovative approach—leveraging Singular Value Factorization (SVF), graph memory, and reinforcement learning (RL) for real-time LLM adaptation—is impressive, efficient, and fast. I’ve adapted GESAL to synthetic engine/turbine data for wear prediction and predictive maintenance, as referenced in @oieieio1’s X update. It has shown promising results, though I’ve noticed that prompt design has a massive impact on outcomes.

My Implementation

I’m using GESAL with meta-llama/Llama-3.2-1B on an Ubuntu 24.04 system with:

  • Hardware: 4060Ti GPU (16GB VRAM), Xeon processors (36 cores), 256GB RAM
  • Software: InfluxDB 3 Core (version 0.1.0 alpha) for storing and querying ~20 engine parameters, including:
    • flight_hours
    • exhaust_temp
    • vibration
    • rpm
    • thrust
    • pressure
    • wear
    • (and others like fuel_flow, oil_temp, etc.)

My latest analysis of 1,326 synthetic engines shows:

  • Nodes: 2 unique nodes
  • Accuracy: 22.48% on maintenance predictions (keywords: “replace,” “maintenance,” “check”)
  • Earlier Results: 42.0% accuracy and 189 nodes with fewer engines (still experimenting with prompts).

For example, a sample input is:
Engine 1—flight_hours=5000h, exhaust_temp=750°C, vibration=0.5g, rpm=12000, thrust=75kN, pressure=1.5bar, wear=3.2. Predict maintenance.

We’re actively experimenting with prompts—e.g., “Predict maintenance” vs. “Identify maintenance needs for wear and predict ‘replace’, ‘maintenance’, or ‘check’”—and finding that slight changes dramatically affect GESAL’s responses, likely due to temperature=0.7 and top_k=50 in generation.
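
For reproducibility, those generation settings correspond to a sampling call like the following; a minimal transformers sketch, assuming an authenticated Hugging Face token for the gated Llama weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = ("Engine 1—flight_hours=5000h, exhaust_temp=750°C, vibration=0.5g, "
          "rpm=12000, thrust=75kN, pressure=1.5bar, wear=3.2. Predict maintenance.")
inputs = tok(prompt, return_tensors="pt")
# temperature=0.7 and top_k=50 as noted above; small prompt edits shift outputs noticeably.
out = model.generate(**inputs, max_new_tokens=50, do_sample=True,
                     temperature=0.7, top_k=50)
print(tok.decode(out[0], skip_special_tokens=True))
```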

Goals and Challenges

While GESAL’s scalability is excellent, I’m targeting:

  • Accuracy: 70-80%
  • Nodes: 10-15

The drop from 42.0% to 22.48% accuracy may stem from prompt variations or scaling effects, possibly amplified by current hyperparameters.

Seeking Feedback

I’d appreciate your insights on:

  • Optimizing prompts for consistency and accuracy
  • Tuning hyperparameters (e.g., temperature, top_k, distance_threshold)
  • Scaling GESAL for large industrial datasets (e.g., thousands of engines)
  • Benchmarking GESAL for mechanical systems, as @hassenhamdi mentioned

The attached visualization (Wear vs Exhaust Temperature) shows current performance—I’m eager to collaborate further to refine prompts and boost accuracy!


Thanks again for this groundbreaking tool—I’m excited to see how GESAL can evolve!

replied to wassemgtk's post 2 days ago

Trained GESAL on synthetic engine/turbine data for wear prediction and predictive maintenance.

(Screenshot: GESAL training run, 2025-02-26.)

reacted to wassemgtk's post with ❤️ 3 days ago
# GESAL: Real-Time Adaptation for LLMs


We’re excited to unveil **Graph-Enhanced Singular Adaptive Learning (GESAL)**, a framework that lets LLMs like meta-llama/Llama-3.2-1B adapt in real time using user feedback. Check out the code and white paper on GitHub!

🔗 **Code**: [https://github.com/writer/AI-Adaptive-Learning-GESAL](https://github.com/writer/AI-Adaptive-Learning-GESAL)

---

## Why GESAL?

Static LLMs struggle to adapt without heavy retraining. GESAL solves this with:
- **SVF**: Adapts weights via \( W' = U (\Sigma \cdot z) V^T \), using few parameters.
- **Graph Memory**: Stores adaptations in nodes for scalability.
- **RL**: Updates via \( J(z) = \mathbb{E}[\log \pi_z(y|x) r] \) based on feedback.
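
Read literally, the RL objective above is a REINFORCE-style step applied only to the SVF z vectors; a toy illustration of one update (my sketch, not the repo's training loop):

```python
import torch

def reinforce_step(logprob, reward, z_params, lr=1e-3):
    """One step on J(z) = E[log pi_z(y|x) * r]: ascend the reward-weighted
    log-probability of the sampled response w.r.t. the z vectors only."""
    loss = -logprob * reward  # negate so gradient descent maximizes J
    loss.backward()
    with torch.no_grad():
        for z in z_params:
            z -= lr * z.grad
            z.grad = None
```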

---

## How It Works

Ask "How many R’s in ‘strawberry’?" If it says "2" and you say "no," GESAL learns to say "3" next time, avoiding repeats.

---

## Try It

Built with Hugging Face’s transformers:
    pip install transformers torch numpy
    python Adaptive_Learning_(GESAL).py

Needs a Hugging Face token for Llama-3.2-1B.

---

## Results

GESAL hits 95% accuracy after 5 feedbacks vs. LoRA’s 70%. It’s efficient (~0.5M params) and scalable.
reacted to stefan-it's post with 👍 4 days ago
She arrived 😍

[Expect more models soon...]
replied to fdaudens's post 15 days ago

> If you're building with AI at any scale, definitely worth checking out.

Yes! Looks great!

reacted to fdaudens's post with 🔥 15 days ago
⭐️ The AI Energy Score project just launched - this is a game-changer for making informed decisions about AI deployment.

You can now see exactly how much energy your chosen model will consume, with a simple 5-star rating system. Think appliance energy labels, but for AI.

Looking at transcription models on the leaderboard is fascinating: choosing between whisper-tiny or whisper-large-v3 can make a 7x difference. Real-time data on these tradeoffs changes everything.

166 models already evaluated across 10 different tasks, from text generation to image classification. The whole thing is public and you can submit your own models to test.

Why this matters:
- Teams can pick efficient models that still get the job done
- Developers can optimize for energy use from day one
- Organizations can finally predict their AI environmental impact

If you're building with AI at any scale, definitely worth checking out.

👉 leaderboard: https://lnkd.in/esrSxetj
👉 blog post: https://lnkd.in/eFJvzHi8

Huge work led by @sasha with @bgamazay @yjernite @sarahooker @regisss @meg
reacted to ZennyKenny's post with 🤗 16 days ago
I've completed the first unit of the just-launched Hugging Face Agents Course. I would highly recommend it, even for experienced builders, because it is a great walkthrough of the smolagents library and toolkit.
reacted to davidberenstein1957's post with 🤗 16 days ago
🚀 Find banger tools for your smolagents!

I created the Tools gallery, which makes tools specifically developed by/for smolagents searchable and visible. This will help with:
- inspiration
- best practices
- finding cool tools

Space: davidberenstein1957/smolagents-and-tools