I apologize for the grammatical error! I'm using talk-to-text these days.
Yeah, so I have the 12880 and 16767 sensors in the housing with the TCS in between them, separated by an alignment mount that isolates each sensor's light from the others, along with an adjustable slit (set during install) to control the incoming light intensity. The TCS sensor is used to bring each unit to roughly the same value.
I have an isolated dataset view for each sensor and a combined full-spectrum view. I also included a unique waterfall detection I came up with many years ago for the daVinci sensor, which was so sensitive it could tell when someone walked into the room because the light intensity shifted, even though the sensor was being bombarded with light from an inch away.
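The waterfall idea above can be sketched as a rolling buffer of spectra with a simple ambient-shift flag. This is a minimal illustration, not the actual firmware; the buffer depth and the 2% threshold are made-up assumptions.

```python
import numpy as np

class Waterfall:
    """Rolling history of spectra; flags small global intensity shifts,
    e.g. ambient light changing when someone enters the room."""

    def __init__(self, n_pixels, depth=256, threshold=0.02):
        self.buf = np.zeros((depth, n_pixels))  # newest frame at row 0
        self.threshold = threshold              # 2% relative shift (placeholder)

    def push(self, frame):
        self.buf = np.roll(self.buf, 1, axis=0)
        self.buf[0] = frame
        baseline = self.buf[1:].mean()          # mean intensity of the history
        if baseline > 0:
            shift = abs(frame.mean() - baseline) / baseline
            return shift > self.threshold       # True = ambient light changed
        return False
```

Each pushed frame also becomes one row of the waterfall display, so slow drifts and sudden shifts are both visible at a glance.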
I have spectral discharge tubes for various gases that I will use for the base calibration, which gets hard-coded into each unit's memory. From there it should only need very minor software calibration, like a standard chroma system or GC.
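The usual way a discharge-tube base calibration works is a polynomial fit from pixel index to wavelength using the tube's known emission lines. A hedged sketch, where both the line wavelengths and the pixel positions are illustrative values, not measurements from this unit:

```python
import numpy as np

# Known emission lines (nm) from a discharge tube, e.g. a mercury-argon lamp
# (line choices here are illustrative; use your tube's actual lines).
known_nm = np.array([365.0, 404.7, 435.8, 546.1, 696.5, 763.5])

# Pixel indices where those peaks were found on this sensor
# (values invented for the sketch).
peak_px = np.array([41, 78, 107, 210, 352, 415])

# Fit a 2nd-order pixel-to-wavelength map; these coefficients are what
# would get stored in the unit's memory as its base calibration.
coeffs = np.polyfit(peak_px, known_nm, deg=2)

def px_to_nm(px):
    return np.polyval(coeffs, px)
```

Per-unit variation in slit and grating alignment is absorbed by the fit, which is why each unit gets its own stored coefficients.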
I put a basic AI in our old daVinci sensor for distillation and trained it to recognize various spectra. I tested it with various solvents in test tubes, and it was quite accurate at identifying all the samples even though, to the naked eye, they were all visually identical, with just the RGBW sensor alone… I am super stoked to get the Hamamatsu dataset now as well.
This new version is getting a full-on AI model. I am making a clone sensor setup for cuvette samples, and I have a 1 ml cuvette, so I can first reference the solvent and cuvette, then load the CRM + solvent into a cuvette and learn the spectrum of the CRM sample. Once this is done, I will be able to send out known-sample spectral datasets to add new known spectra to every sensor we sell.
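The reference-then-sample workflow described above is the standard Beer-Lambert absorbance calculation: the solvent + cuvette scan is the reference, and absorbance is computed per pixel. A minimal sketch (the optional dark-frame subtraction is an assumption about the measurement routine):

```python
import numpy as np

def absorbance(sample, reference, dark=None):
    """Per-pixel absorbance A = -log10(I_sample / I_reference).

    sample/reference/dark are raw intensity arrays from the sensor;
    dark (a reading with the source blocked) is optional.
    """
    sample = np.asarray(sample, dtype=float)
    reference = np.asarray(reference, dtype=float)
    if dark is not None:
        sample = sample - dark
        reference = reference - dark
    # Clip to avoid log of zero/negative counts on noisy pixels
    ratio = np.clip(sample, 1e-9, None) / np.clip(reference, 1e-9, None)
    return -np.log10(ratio)
```

The resulting absorbance spectrum of the CRM + solvent, minus the solvent reference, is what would get stored as the known spectrum for that compound.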
Absorption spectroscopy is the way. All molecules absorb, but only a limited set of molecules fluoresce.
This is the real reason I want to put a deuterium lamp into our current sensor: to actually identify cannabinoids, you need a very short-wavelength source. They absorb light in the 190–280 nm range to create the spectral ID that typical HPLCs use. I need a source that emits in that range, and a deuterium lamp does.
Now, ultimately, the real problem is that when you have a mixed bag of products flowing across the sensor, you will see a mixed blob of results.
My goal is to have three sensors: one on the supply rail as a live reference to the solvent, one after extraction to identify the mixture, and one after CRC to measure how well it did. The AI goal is to suggest which media, and how much of each, to use to achieve an output color based on the input color and the known impurities that were detected.
I have considered adding NIR; Hamamatsu makes a micro NIR detector as well, but it doesn't reach far enough into the IR range for solid cannabinoid detection.
We can talk about this once I have a fully working model that I am willing to release to the public. I will not sell it until I have proven it actually works, not just claimed it works like errormetrix did.
I am down to collab and make this into what it really should be. My ultimate goal is to use this sensor for system automation. Solvent saturation detection is ULTRA simple and comes on both designs.
We are making a full molecular-ID model and one that just does RGBW and solvent saturation. It saves $1000 in sensors if you're only after solvent saturation detection and crude impurity detection…
From what I understand, Raman is a better/easier option than NIR. Building an inline Raman system is my next analytical device project.
You’ve still got my contact info.
Yeah, I am aware and would still like to work something out here once I have a fully working unit; I was planning on contacting you about the project.
I could incorporate Raman, technically. I was already considering it, since it just goes on the source-light side. It is not too difficult to do; you just have to align the optics. It uses the cheaper Hamamatsu sensor too!
We are getting one to play with soon so I will even have a working unit as a reference…
I’m almost certain @thesk8nmidget would be up to collaborate/collect data if you want to see how these work on someone else’s system.
He, @Lincoln20XX, and I tried really hard to play this game with the Arometrix device(s)… so we are already well versed in the how/where/why questions.
I am all about getting it into hands of operators that are capable of properly testing it once I am happy with the data I see. We would set something up with a sensor that gets shipped around for everyone to play with or whatever.
Baby steps first… get it fully working to where I am happy with the hardware and some rudimentary software, then have it properly revamped by @redonkuless, as I know he can produce EXACTLY what I want for the final software output.
Then we will likely do some operator testing with the ship-around unit, make any necessary code adjustments, and finally release the unit, all the while working in the background to get our custom circuit ETL listed and factory produced and the hardware PSI certified.
Finally, after all that fun, it will be available to the public. I want operators to validate it first.
@Zack_illuminated I can’t wait to get started!
So, for everyone else:
From the software side, the priority is making sure operators own their data and aren’t locked into a black box.
The system is being built as a local-first, offline platform with clean, well-defined export paths. Raw and processed spectra, calibration states, run metadata, and results will all be exportable in standard formats (JSON / CSV), so data can be analyzed externally, archived long-term, or fed into other tools if desired.
That approach means the software itself doesn’t need to be open-sourced for the ecosystem to stay open — the data is portable and transparent, which is what actually matters.
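As a concrete illustration of those export paths, a run might be written out as plain CSV plus JSON. This is a hedged sketch only; the field names (`export_run`, `spectra.csv`, `run.json`, `calibration_version`) are placeholders I'm inventing for the example, not a committed schema.

```python
import csv
import json
from pathlib import Path

def export_run(run_dir, wavelengths_nm, raw, processed, metadata):
    """Write one run out as plain CSV + JSON so any external tool can read it.

    Produces (illustrative layout, not a fixed format):
    - spectra.csv : one row per pixel (wavelength_nm, raw, processed)
    - run.json    : run metadata (calibration version, timestamps, etc.)
    """
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)
    with open(run_dir / "spectra.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["wavelength_nm", "raw", "processed"])
        w.writerows(zip(wavelengths_nm, raw, processed))
    with open(run_dir / "run.json", "w") as f:
        json.dump(metadata, f, indent=2)
```

Because everything lands in standard formats on local disk, archival and external analysis need no cooperation from the software at all.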
On the stack side, it’s a lightweight web UI running locally on the tablet, real-time rendering at sensor rates, versioned calibration and template locking per run, and deterministic pipelines before any AI comes into play.
This keeps the system inspectable, reproducible, and future-proof while still allowing the detection models and workflows to evolve over time.
Fully aligned with the plan to validate in the field first, let operators stress it, and only release once it's proven to do what it claims. I have a lot of faith in what @Zack_illuminated does.
If your “AI” isn’t deterministic, it shouldn’t be anywhere near process control.
100% agree. Anything touching process control needs deterministic behavior, bounded outputs, and clear failure modes.
When I say AI, I’m talking about advisory layers on top of a deterministic pipeline — not something driving valves or media selection blindly. Control logic should always be inspectable and predictable.
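One shape that advisory layer could take: the model only ever emits a suggestion, which a deterministic wrapper bounds and labels before a human sees it. The function name, units, and limits below are all hypothetical placeholders for the sketch.

```python
def advise_media_dose(predicted_grams, min_g=0.0, max_g=50.0):
    """Advisory wrapper around a model suggestion: clamp it into safe
    bounds and report status instead of acting on it. The gram limits
    here are placeholders, not real process limits.
    """
    # NaN or missing prediction -> explicit "no recommendation" failure mode
    if predicted_grams is None or predicted_grams != predicted_grams:
        return {"status": "no_recommendation", "grams": None}
    clamped = min(max(predicted_grams, min_g), max_g)
    status = "ok" if clamped == predicted_grams else "clamped"
    return {"status": status, "grams": clamped}
```

The point is that every output is bounded and every failure mode is named, so the operator always knows whether they are looking at a clean suggestion, a clipped one, or none at all.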
If this is working, I'd love to try it. I know the first question is ROI. What are you thinking for the value prop here, so I can start brainstorming the pitch to management? lol
Is there an informal working group on this?