Datacake Nodes
A comprehensive collection of Node-RED nodes for integrating with the Datacake IoT Platform, OpenAI, and LoRaWAN Network Servers.
Introduction
These Node-RED nodes provide seamless integration with Datacake's IoT platform, enabling you to:
Query device data and sensor measurements
Monitor fleet health across your entire workspace
Send downlinks to LoRaWAN devices
Analyze historical data with flexible time ranges
Calculate consumption statistics for meters and counters
Generate AI-powered insights from your IoT data
Monitor gateway status from The Things Stack
All nodes are designed to work together seamlessly in Node-RED flows, providing a complete toolkit for IoT data integration and analysis.
Installation
All nodes shown in this guide work with all types of Node-RED deployments: Datacake Cake Red, Docker deployments, local Node-RED installations, and FlowFuse.
Install directly from the Node-RED palette manager.

Or via npm:
npm install node-red-contrib-datacake-helpers
npm install node-red-contrib-datacake
Node Categories
Datacake Nodes
The core nodes for interacting with the Datacake IoT Platform via GraphQL API:
Datacake GraphQL Config - Store Datacake API credentials (Workspace UUID and Token)
Device - Fetch complete device data and measurements
Field - Query specific field values from one or multiple devices
History - Retrieve historical time series data with flexible time ranges
Consumption - Calculate meter/counter statistics (energy, water, gas, etc.)
Downlink - Send LoRaWAN downlink commands to devices
Semantic - Query devices by semantic types (temperature, humidity, battery, etc.)
Product Stats - Calculate aggregated statistics across products
Fleet Health - Get workspace-wide health overview and metrics
Device Post - Send data to Datacake devices via HTTP API
Raw GraphQL - Execute custom GraphQL queries with full control
View Datacake Nodes Documentation →
Datacake AI Nodes
AI-powered data analysis and report generation using OpenAI:
Datacake AI Config - Store OpenAI API credentials
Datacake AI - Analyze IoT data with GPT models, generate reports, execute Python code
Key Features:
🤖 Multiple GPT model support (GPT-5, GPT-5-mini, GPT-5-nano)
💰 Automatic cost calculation and tracking
💻 Code Interpreter for data analysis and visualizations
🌐 Web search for real-time information
📊 CSV/JSON data analysis
📝 Automated report generation
View Datacake AI Nodes Documentation →
Datacake LNS Nodes
LoRaWAN Network Server integration for gateway monitoring:
TTS Config - Store The Things Stack API credentials
TTS Gateway - Monitor gateway status, connection statistics, and uplink/downlink counters
View Datacake LNS Nodes Documentation →
Quick Start
1. Configure Authentication
First, add the appropriate configuration node for your needs:
For Datacake Nodes:
Add a Datacake GraphQL Config node
Enter your Workspace UUID
Enter your Workspace Token
For AI Nodes:
Add a Datacake AI Config node
Enter your OpenAI API Key (starts with sk-...)
For LNS Nodes:
Add a TTS Config node
Enter your TTS Server URL (e.g., https://eu1.cloud.thethings.network)
Enter your TTS API Key with gateway read permissions
2. Build Your First Flow
Example: Monitor Device Temperature
[Inject: Every 5 minutes]
↓
[Datacake GraphQL Device]
↓
[Function: Extract temperature]
msg.payload = msg.payload.TEMPERATURE;
return msg;
↓
[Dashboard Gauge]
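The "Extract temperature" step above can be made more defensive. Below is a hedged sketch of that function node's body, written as a plain function so it can be tested outside Node-RED. It assumes the Device node returns current values keyed by field identifier (e.g. msg.payload.TEMPERATURE); adjust the field name to match your device.

```javascript
// Sketch of the "Extract temperature" function node body.
// Assumption: the Device node puts field values on msg.payload
// keyed by field identifier (here TEMPERATURE).
function extractTemperature(msg) {
    const value = msg.payload && msg.payload.TEMPERATURE;
    if (value === undefined || value === null) {
        return null; // returning null drops the message in Node-RED
    }
    msg.payload = value; // dashboard gauges expect a plain number
    return msg;
}
```

Returning null when the field is missing keeps malformed readings off the gauge instead of rendering garbage.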
Example: Energy Consumption Report
[Inject: Daily at 8 AM]
↓
[Datacake GraphQL Consumption]
↓
[Function: Format email]
↓
[Email Node]
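A hedged sketch of the "Format email" step: the property names (total, previousTotal, unit) are assumptions about the Consumption node's output shape, so check the node's documentation for the real field names before copying this.

```javascript
// Sketch of the "Format email" function node body.
// Assumption: msg.payload carries { total, previousTotal, unit }.
function formatConsumptionEmail(msg) {
    const stats = msg.payload || {};
    const total = stats.total ?? 0;
    const prev = stats.previousTotal ?? 0;
    // Percentage change vs. the previous period, if one exists
    const change = prev ? (((total - prev) / prev) * 100).toFixed(1) : 'n/a';
    msg.topic = 'Daily Energy Consumption Report'; // becomes the email subject
    msg.payload = [
        `Consumption: ${total} ${stats.unit || 'kWh'}`,
        `Change vs. previous period: ${change} %`
    ].join('\n');
    return msg;
}
```

The email node uses msg.topic as the subject and msg.payload as the body, so no further mapping is needed.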
Example: AI Data Analysis
[File Read: CSV sensor data]
↓
[Datacake AI]
Prompt: "Analyze this sensor data and identify anomalies"
↓
[Debug/File Write]
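Since the AI node's cost tracking is token-based, it can pay to trim large CSV files before analysis. A hedged sketch of a function node you could place between the file read and the AI node, keeping the header plus only the most recent rows:

```javascript
// Sketch: trim a CSV payload to its header plus the last maxRows rows
// before sending it to the AI node, to limit token usage and cost.
function trimCsv(msg, maxRows = 200) {
    const lines = String(msg.payload).trim().split('\n');
    const header = lines[0];            // keep the column names
    const rows = lines.slice(1).slice(-maxRows); // keep the newest rows
    msg.payload = [header, ...rows].join('\n');
    return msg;
}
```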
Common Use Cases
Fleet Monitoring
Monitor the health and status of all devices in your workspace:
Online/offline status
Battery levels
Signal strength
Connectivity statistics
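A typical pattern is to turn fleet output into an alert when devices go offline. The payload shape below (a devices array with an online flag) is an assumption; verify it against the Fleet Health node's actual output before use.

```javascript
// Sketch: emit an alert message listing offline devices, or drop the
// message when everything is online.
// Assumption: msg.payload.devices is an array with { id, online } entries.
function offlineDevices(msg) {
    const devices = (msg.payload && msg.payload.devices) || [];
    const offline = devices.filter(d => !d.online);
    if (offline.length === 0) {
        return null; // nothing to report; message is dropped
    }
    return { topic: 'offline-alert', payload: offline };
}
```

Wire the output into a notification node (email, Slack, etc.) so alerts only fire when the offline list is non-empty.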
Energy Management
Track consumption and costs for energy meters:
Daily, weekly, monthly consumption
Percentage changes and trends
Monthly breakdowns with year-over-year comparisons
Automated billing reports
Predictive Maintenance
Use AI to analyze sensor data and predict issues:
Anomaly detection in device readings
Pattern recognition for degradation
Maintenance schedule recommendations
Automated alert generation
Remote Device Control
Send downlink commands to configure devices:
Change reporting intervals
Update sensor thresholds
Trigger device actions
Schedule configuration changes
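Downlink payloads are device-specific byte sequences. As a hedged illustration only, the sketch below encodes a reporting interval in minutes as a command byte plus a two-byte big-endian value; the command byte 0x01 and the hex format are assumptions about a hypothetical device decoder, not a Datacake or LoRaWAN standard.

```javascript
// Sketch: build a hex downlink payload setting a reporting interval.
// Assumption: the device expects [command 0x01, interval high byte,
// interval low byte]; replace with your device's actual protocol.
function buildIntervalDownlink(minutes) {
    const buf = Buffer.from([
        0x01,                  // hypothetical "set interval" command
        (minutes >> 8) & 0xff, // high byte
        minutes & 0xff         // low byte
    ]);
    return buf.toString('hex');
}
```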
Historical Analysis
Analyze time series data with flexible time ranges:
Trend analysis
Performance comparisons
Data visualization
Export for external tools
Getting Help
Datacake Documentation: docs.datacake.de
Node-RED Documentation: nodered.org/docs
OpenAI API Documentation: platform.openai.com/docs
The Things Stack Documentation: thethingsindustries.com/docs
API Requirements
Datacake API
Workspace UUID: Found in your Datacake workspace settings
Workspace Token: Generate in workspace settings under "API Tokens"
Permissions: Read access for query nodes, write access for downlink nodes
OpenAI API
API Key: Generate at platform.openai.com/api-keys
Billing: Ensure you have billing enabled and credits available
Rate Limits: Nodes respect OpenAI API rate limits
The Things Stack API
API Key Permissions Required:
RIGHT_GATEWAY_INFO - Read gateway information
RIGHT_GATEWAY_STATUS_READ - Read gateway status and statistics
Best Practices
Use Configuration Nodes - Store credentials in config nodes, not in individual nodes
Handle Errors - Use catch nodes to handle API errors gracefully
Rate Limiting - Respect API rate limits, especially with Raw GraphQL node
Monitor Costs - Track AI node costs using the built-in cost calculation
Cache When Possible - Store frequently accessed data in flow/global context
Use Appropriate Nodes - Choose the right node for your use case (don't use Raw GraphQL when a specific node exists)
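The "Cache When Possible" practice can be sketched as a small time-based cache. In a real function node you would back it with flow or global context (flow.get/flow.set); here a plain object stands in for the context store so the logic is testable on its own.

```javascript
// Sketch: return a cached value if it is younger than ttlMs,
// otherwise call fetchFn and cache the result.
// In Node-RED, replace `store` with flow/global context access.
function cachedFetch(store, key, ttlMs, fetchFn, now = Date.now()) {
    const entry = store[key];
    if (entry && now - entry.at < ttlMs) {
        return entry.value; // still fresh: skip the API call
    }
    const value = fetchFn();       // e.g. trigger a Datacake query
    store[key] = { value, at: now };
    return value;
}
```

This keeps frequently polled flows from hammering the API on every inject tick.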
Performance Tips
Fleet Health Node: Use compact mode (without device lists) for faster responses
History Node: Use "Auto" resolution for optimal performance
Consumption Node: Monthly breakdown adds overhead but provides valuable insights
AI Node: Use appropriate model for your task (nano for simple, mini for most, full for complex)
Semantic Node: Multiple semantics are fetched in parallel for optimal speed