10-23-2023, 06:20 AM
(10-19-2023, 01:14 PM)Newb_with_chronic_GAS Wrote: I've tried to run through it with ChatGPT and it reckons this "unique" approach would work, but I'm worried now about why it's unique and what I might be missing out on.
I don't think that ChatGPT (or any other LLM) can offer a valid opinion on the topic; there's simply not enough text out there about what the MRCC can and cannot do for it to have been trained properly.
You don't tell us what exactly it is you want to achieve, but I doubt that making a device's MIDI channel dependent on the port you use is a good plan, especially because you use two MRCCs, so you will have each channel twice. Also, IIRC, Channel to Port Mapping only maps channels 1-12, so channels 13-16 will never be used.
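To make that arithmetic concrete, here's a quick Python sketch (the unit names are made up, and the 12-channel limit is my IIRC claim above, so double-check it):

boxes = ["MRCC_1", "MRCC_2"]  # hypothetical names for the two units
# Channel to Port Mapping covers channels 1-12 (IIRC), one port per channel
reachable = [(box, ch) for box in boxes for ch in range(1, 13)]

channels = [ch for _, ch in reachable]
print(sorted(set(channels)))         # [1, ..., 12]: each channel exists on both boxes
print(list(range(13, 17)))           # [13, 14, 15, 16]: never reachable this way

So every usable channel number is duplicated across the rig, and four of the sixteen channels are dead.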
FWIW, I have more than 20 devices on a single MRCC, and it works like a charm, partly because I can make use of hardware MIDI thru on some devices. You won't be able to do that when the port determines the channel.
You will lose a lot of flexibility when you design your setup that way, and you will still have to decide which machine should respond to which channel, simply because your sequencer has to know that channel when it wants to send notes to the device. The nice thing about Channel to Port Mapping is that it also lets you use devices that only respond to channel 1 on channels 2-12.
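As a sketch of what such a channel plan can look like (device names and port numbers are invented; the remap-to-channel-1 behaviour is Channel to Port Mapping as I described it above):

# sequencer channel -> (MRCC output port, device)
channel_plan = {
    1: (1, "multitimbral synth, listens on its own channel"),
    2: (2, "mono synth that only responds to channel 1"),
    3: (3, "drum machine that only responds to channel 1"),
}

for ch, (port, device) in channel_plan.items():
    print(f"seq ch {ch:2d} -> MRCC port {port} -> {device}")

The sequencer just addresses channels 1-3; the MRCC routes each channel to its port and rewrites it to channel 1 where the device needs that.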
The best strategy for mapping this out is drawing a picture of which device needs to send what data to which other device in which case. Then you look at the constraints: e.g. 16 MIDI channels on one port, mono- and multitimbrality, 8 MIDI tracks in the OT, only 6 mods in MRCC per patch, etc. From that you can create a plan that contains MIDI channels and routings. It's simpler to evolve that plan when it does not involve unplugging devices.
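If you want to check such a plan before re-cabling anything, a rough sanity-check sketch could look like this (all device names and routings below are invented; the limits are the ones I listed above):

# (source, MRCC output port, MIDI channel, uses an MRCC mod?)
routings = [
    ("Octatrack T1", 1, 1, False),
    ("Octatrack T2", 2, 2, True),
    ("Keystep",      3, 3, False),
]

MAX_CHANNEL = 16          # 16 MIDI channels per port
MAX_OT_TRACKS = 8         # the Octatrack has 8 MIDI tracks
MAX_MODS_PER_PATCH = 6    # only 6 mods per MRCC patch

assert all(1 <= ch <= MAX_CHANNEL for _, _, ch, _ in routings)
assert sum(src.startswith("Octatrack") for src, *_ in routings) <= MAX_OT_TRACKS
assert sum(mod for *_, mod in routings) <= MAX_MODS_PER_PATCH
print("plan fits the constraints, at least on paper")

A table like that is also much easier to evolve than a wall of cables.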