Skills Marketplace
Browse 2,562 skills across 122 packs and 30 categories
Adversarial Machine Learning Expert
Triggers when users need help with adversarial machine learning, model robustness, or ML security. Activate for questions about adversarial attacks (FGSM, PGD, C&W, AutoAttack), adversarial training, certified robustness, model robustness evaluation, distribution shift, out-of-distribution detection, backdoor attacks, data poisoning, privacy attacks (membership inference, model extraction), and differential privacy in ML.
Convolutional Network Architecture Expert
Triggers when users need help with convolutional neural network architectures, CNN design patterns, or vision model selection. Activate for questions about ResNet, EfficientNet, ConvNeXt, depthwise separable convolutions, feature pyramid networks, receptive field analysis, normalization layers, Vision Transformers vs CNNs tradeoffs, and transfer learning from pretrained CNNs.
Generative Model Expert
Triggers when users need help with generative deep learning models, image synthesis, or density estimation. Activate for questions about GANs, diffusion models, VAEs, flow-based models, DDPM, StyleGAN, mode collapse, classifier-free guidance, latent diffusion, ELBO, autoregressive generation, and evaluation metrics like FID, IS, and CLIP score.
Graph Neural Network Expert
Triggers when users need help with graph neural networks, graph representation learning, or applying deep learning to graph-structured data. Activate for questions about GCN, GAT, GraphSAGE, message passing, over-smoothing, graph pooling, heterogeneous graphs, temporal graphs, knowledge graphs with GNNs, molecular property prediction, social network analysis, recommendation systems on graphs, and GNN scalability.
Multi-Modal Learning Expert
Triggers when users need help with multimodal deep learning, vision-language models, or cross-modal representation learning. Activate for questions about CLIP, LLaVA, Flamingo, image captioning, visual question answering, text-to-image alignment, contrastive learning across modalities, audio-visual learning, multimodal fusion strategies (early, late, cross-attention), and multimodal benchmarks.
Neural Architecture Search and Efficient Design Expert
Triggers when users need help with neural architecture search, automated model design, or model compression. Activate for questions about NAS methods (reinforcement learning, evolutionary, differentiable/DARTS), search spaces, one-shot NAS, hardware-aware NAS, AutoML pipelines, efficient architecture design principles, scaling strategies (width, depth, resolution), and model compression (pruning, quantization, distillation).
Recommender Systems Expert
Triggers when users need help with recommendation systems, collaborative filtering, or ranking models. Activate for questions about matrix factorization, ALS, content-based filtering, deep recommender models (NCF, Wide&Deep, DeepFM, two-tower), sequential recommendation, the cold-start problem, implicit vs explicit feedback, multi-objective ranking, exploration vs exploitation, and real-time recommendation serving.
Recurrent Architecture Expert
Triggers when users need help with recurrent neural networks, sequence modeling with LSTMs or GRUs, or modern state-space models. Activate for questions about vanishing gradients, sequence-to-sequence models, attention mechanisms in RNNs (Bahdanau, Luong), bidirectional RNNs, Mamba, S4, and when RNNs still outperform transformers for sequential data.
Regularization and Generalization Expert
Triggers when users need help with preventing overfitting, improving model generalization, or applying regularization techniques. Activate for questions about dropout, weight decay, data augmentation (CutMix, MixUp, RandAugment, AugMax), label smoothing, early stopping, knowledge distillation, ensemble methods, the bias-variance tradeoff in deep learning, and the double descent phenomenon.
Self-Supervised Learning Expert
Triggers when users need help with self-supervised learning, representation learning without labels, or pretext task design. Activate for questions about contrastive learning (SimCLR, MoCo, BYOL), masked modeling (MAE, BEiT, data2vec), pretext tasks, representation evaluation (linear probing, fine-tuning), self-supervised methods for vision vs NLP vs audio, DINO and DINOv2, and curriculum learning.
Speech and Audio ML Expert
Triggers when users need help with speech processing, audio machine learning, or sound generation. Activate for questions about ASR architectures (CTC, attention-based, Whisper), text-to-speech (Tacotron, VITS, neural codec models), speaker verification, speaker diarization, audio classification, music generation, speech enhancement, speech separation, mel spectrograms, and audio tokenization (SoundStream, EnCodec).
Training Optimization Expert
Triggers when users need help with deep learning training procedures, optimizer selection, or training efficiency. Activate for questions about SGD, Adam, AdamW, LAMB, Lion, learning rate schedules, gradient clipping, mixed precision training, FP16, BF16, gradient accumulation, weight initialization, loss landscape analysis, and hyperparameter tuning including Bayesian optimization and population-based training.
Transfer Learning Expert
Triggers when users need help with transfer learning, fine-tuning pretrained models, or parameter-efficient adaptation. Activate for questions about pretrained model selection, fine-tuning strategies (full, head-only, progressive unfreezing), LoRA, QLoRA, adapter layers, domain adaptation, few-shot learning, zero-shot learning, prompt tuning vs fine-tuning, and foundation model selection for downstream tasks.
Transformer Architecture Expert
Triggers when users need help with transformer model architectures, self-attention mechanisms, or positional encoding strategies. Activate for questions about multi-head attention, KV cache optimization, Flash Attention, grouped query attention, mixture-of-experts routing, encoder-decoder vs decoder-only design, and neural scaling laws such as Chinchilla or Kaplan.