Ratings and Reviews
AI Council • 18 ratings
PP
May 3, 2026
App version: 6.2.6
Works great with the standard models; unfortunately I cannot use anything from Mistral with my own API keys (testing shows 0 models found).
Developer Response
Yesterday
Thanks for flagging this - you are right, and the bug is on us. Mistral’s /v1/models endpoint reports its models with a type value our filter was not expecting, so the listing was being silently dropped to zero. Every Mistral chat model your key can access (mistral-large, medium, small, codestral, ministral, devstral, magistral, pixtral, open-mistral-nemo) was unreachable because of it. The fix is in the next release, now in review. Once it lands, deleting and re-entering your Mistral key, or hitting “Test” again, will pull the full catalogue. The same release also fixes the custom OpenAI-compatible provider issue and the “network connection was lost” pop-ups on follow-up replies. Sorry for the frustration caused.
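For illustration, here is a minimal Python sketch of the kind of filtering bug this reply describes: a catalogue fetched from a provider's /v1/models endpoint is kept only when each entry carries an expected "type" value, so an unexpected value silently drops the whole listing to zero. The field names and filter logic below are assumptions for the sketch, not AI Council's actual code.

```python
# Sample of what a /v1/models listing might look like when the provider
# reports a "type" value the client's filter was not expecting.
SAMPLE_LISTING = [
    {"id": "mistral-large-latest", "type": "base"},  # unexpected "type"
    {"id": "codestral-latest", "type": "base"},
]

def filter_models_buggy(listing):
    # Old behaviour: keep only entries whose type matches the expected
    # value, silently dropping everything else.
    return [m["id"] for m in listing if m.get("type") == "model"]

def filter_models_fixed(listing):
    # Fixed behaviour: keep any entry with an id; "type" is advisory.
    return [m["id"] for m in listing if "id" in m]

print(len(filter_models_buggy(SAMPLE_LISTING)))  # 0 — "0 models found"
print(filter_models_fixed(SAMPLE_LISTING))       # both models survive
```

The fix is simply to stop treating an unrecognised metadata value as a reason to discard an otherwise valid model entry.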
Eric
Apr 24, 2026
App version: 5.3
Extremely unstable: it fails on half of my requests with no explanation, especially the complex ones.
Developer Response
Apr 24, 2026
Thank you for the honest feedback - we take it seriously. Version 6.0, submitted to Setapp review today, directly addresses what you describe: 1. Every error now identifies the provider involved, for example "Rate limit reached (anthropic)" instead of a generic message. No more silent, unexplained failures. 2. The coordinator's synthesis now streams tokens live and automatically retries on transient network errors. Previously, a session could stall if a single model stuttered - 6.0 recovers cleanly. For complex requests, you can also add your own OpenAI, Anthropic, or Google key in Settings → Providers. AI Council will use it before touching Setapp credits, isolating you on a reliable provider. Thanks again.
Jason
Apr 19, 2026
App version: 5.2
I’m going to give this a thumbs up so as not to pile on, but this app needs some work. Good idea and solid execution, but… It doesn’t recognize my SetApp credit at all. It did a few generations and then told me I was out of credit.
Developer Response
Apr 21, 2026
Thanks for the thumbs up, and for the honest feedback - you caught a real bug. A single transient 402 from the Setapp AI gateway was locking the “out of credits” state on for an hour, even after subsequent calls succeeded. So a momentary hiccup on MacPaw’s side would tell you you’d run out when you hadn’t. 5.3 (awaiting review) clears the exhausted flag the instant any request goes through successfully. Also worth knowing: Setapp AI credits are a separate pool from your main Setapp subscription (10/month on base, 125 on AI+ Enthusiast, 250 on Expert). 5.3 also uses cheaper models by default so credits stretch further. Please give it another try once 5.3 lands. Thanks!
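The latching behaviour this reply describes can be sketched in Python. The one-hour lock window and the clear-on-any-success rule follow the reply's description, but the class and method names are hypothetical.

```python
import time

class CreditGate:
    """Sketch of an 'out of credits' latch (hypothetical names).
    Old behaviour: one transient 402 locked the exhausted state for
    an hour. The fix clears it as soon as any request succeeds."""

    LOCK_SECONDS = 3600  # old behaviour: latched for one hour

    def __init__(self):
        self.exhausted_until = 0.0

    def record_response(self, status_code, now=None):
        now = time.time() if now is None else now
        if status_code == 402:
            self.exhausted_until = now + self.LOCK_SECONDS
        elif 200 <= status_code < 300:
            self.exhausted_until = 0.0  # the 5.3 fix: clear on success

    def is_exhausted(self, now=None):
        now = time.time() if now is None else now
        return now < self.exhausted_until

gate = CreditGate()
gate.record_response(402, now=0.0)
print(gate.is_exhausted(now=10.0))   # True: latched after the 402
gate.record_response(200, now=20.0)
print(gate.is_exhausted(now=30.0))   # False: cleared by the success
```

Clearing the flag on the first success means a momentary gateway hiccup can no longer masquerade as an exhausted credit pool.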
Danilo Marinucci
Apr 16, 2026
App version: 5.0
The idea behind the app is interesting, but unfortunately it doesn’t work as expected. It reports that Setapp has no credits left, while Setapp clearly shows that credits are still available. When trying to use models with personal API keys, the configuration fails: the app keeps pointing to Setapp models and does not allow the use of local models for councils. Overall, the setup process is complicated, unclear, and likely not fully functional—or at least very difficult to get working properly. In its current state, the app feels deeply immature.
Developer Response
Apr 16, 2026
Thank you for the review. A few clarifications: 1. “Setapp has no credits” - Setapp AI credits (MacPaw’s AI gateway) are tracked separately from your Setapp subscription credits. When the gateway returns 402, your AI allocation is exhausted even though your main Setapp account is active. Our error copy needs to make this clearer and we will update it. 2. “Keeps pointing to Setapp models with personal keys” - BYOK takes priority over Setapp AI automatically. Once you add an OpenAI or Anthropic key, those providers route through your key. The model IDs in the council look the same, but the routing path changes. A related bug affecting follow-up messages was fixed in 5.1. 3. “No local models for councils” - Ollama is supported on macOS. The council is pre-filled with Setapp AI models on first launch; open Settings > Council to swap them for your local models. We will make this more discoverable. A patch is in preparation to fix these points.
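The BYOK-first routing described in point 2 boils down to a simple precedence rule: a personal key for a provider wins over the managed Setapp AI gateway. The function and return values below are illustrative, not the app's actual implementation.

```python
def route_for(provider, personal_keys):
    """Pick a routing path for a provider (illustrative sketch).
    BYOK takes priority; otherwise fall back to the managed gateway."""
    if provider in personal_keys:
        return ("byok", personal_keys[provider])
    return ("setapp_gateway", None)

print(route_for("openai", {"openai": "sk-demo"}))  # ('byok', 'sk-demo')
print(route_for("anthropic", {}))                  # ('setapp_gateway', None)
```

This is why the model IDs in the council look unchanged after adding a key: only the routing path behind them switches.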
Excellent Tool
Apr 29, 2026
App version: 6.2.6
I’m updating my initial review, in which I said that the app wasn’t ripe yet due to its inability to attach files, pictures, or text over 56k characters. All three of these features are now available. I tested it on my Gemini, Grok, GPT-4o, and Claude models, and it worked great in all of the custom groups I created, including by speed (e.g., Gemini Flash, Claude Haiku) and deep thinking (e.g., Claude Opus). I think this will become an indispensable tool, given its ability to cross-check, compare, and provide the best responses by category from any of my AI models of choice using their API keys. Take some of the other reviews with a grain of salt: AI will inherently consume a lot of CPU, so if you have an Intel or a weak processor you won’t have as smooth an experience.
Developer Response
Apr 30, 2026
Thank you for updating your review - it genuinely means a lot to us. AI Council coordinates multiple providers, concurrent API calls, anonymised peer review across models, and file and image handling. Ensuring reliability across all combinations of models and settings is complex, and issues do occur, as you found out. Your original feedback identified key gaps and directly influenced what was prioritised. Good to see the custom groups working - the speed versus deep-thinking split is a core use case. Thank you for your patience, and please do send feedback on any features you feel may be worth adding (or indeed if you spot any more gremlins!).
Juan Carlos Jiménez Magaña
Apr 23, 2026
App version: 5.3
I think the idea is very good, but so far it has only given me incorrect and completely made-up answers, so I don’t trust it. On top of that, I’ve wasted my tokens for nothing.
Developer Response
Apr 24, 2026
Thanks for the honest review - the frustration is fair, and we’re sorry about the lost credits. Hallucinations are a real problem with all LLMs. In version 6.0, submitted to Setapp review today, we added Web Search with 8 providers (Tavily, Brave, Exa, You.com, Kagi, SerpAPI, Perplexity, Google). Every answer shows the actual sources as clickable links, so you can verify where the information comes from. To avoid spending Setapp credits: in Settings → Providers you can add your own OpenAI, Anthropic, or Google key. AI Council will use it before touching Setapp’s managed credits. There is no separate AI Council subscription; the quota is part of your Setapp AI plan, managed by Setapp. Thanks for flagging this.
David
Apr 18, 2026
App version: 5.1
Hi developer, could you let users customize the AI provider source? I have access to some third-party inference platforms, but the settings menu only offers the big-company providers. I’d like to be able to set the inference endpoint URL myself rather than being limited to the built-in providers.
Developer Response
Apr 21, 2026
Thanks for the feedback! The custom AI provider feature you asked for was actually added in version 5.2, released shortly after your review. How to use it: open Settings → Providers, scroll to the bottom of the provider list, and you’ll see a Custom Providers section. Click the + button on the right to add any OpenAI-compatible inference endpoint - enter a name, a Base URL (e.g. https://your-endpoint.com/v1), and an API key. This lets you connect any third-party inference platform. The upcoming 5.3 release also adds MiniMax and Z.ai (GLM) as built-in providers, so users in China can configure them in one click without going through the custom-provider path. 5.3 is finished and awaiting review. If you still run into issues after upgrading, please keep the feedback coming. Thanks for your patience!
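What an OpenAI-compatible custom provider entry boils down to is a base URL plus an API key, with chat requests POSTed to <base URL>/chat/completions. Here is a minimal Python sketch assuming the standard OpenAI-style endpoint layout; the URL, key, and model name are placeholders.

```python
import json
from urllib import request

def build_chat_request(base_url, api_key, model, user_message):
    """Build a chat request against any OpenAI-compatible endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("https://your-endpoint.com/v1", "sk-demo",
                         "my-model", "hello")
print(req.full_url)  # https://your-endpoint.com/v1/chat/completions
```

Because so many inference platforms expose this same surface, a single base-URL-plus-key form is enough to support all of them.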
Mark Reynolds
Apr 16, 2026
App version: 5.0
I’ve downloaded and installed several local models to use, but the app does not show them in the list of council members, so I’m only allowed to choose from OpenAI, Anthropic, and Google. The local models are shown in my local models list; the app sees them. The Council Members list, however, doesn’t even show them as an option.
Developer Response
Apr 16, 2026
Thank you for reporting this. You’re right that the app supports two local-model paths, and only one currently appears in the Council Members list. If you install Ollama (ollama.com) and run ollama pull <model>, any model you pull will appear automatically in the Council Members picker on macOS. This path works end-to-end for both direct chat and councils. The “Link Local Model” feature that scans your disk for .gguf files is a separate path intended to help users locate existing models, but it does not yet run inference against them. That is why they appear in the Local Models list but not in the Council Members picker. This is a gap on our side rather than intended behaviour. We will address it in the next update, either by making linked models fully usable or by making the limitation explicit in the UI so it is not misleading. Apologies for the frustration in the meantime.
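For reference, Ollama's local server exposes its installed models at GET http://localhost:11434/api/tags, which is how a client can populate a model picker like the one described above. A small Python sketch of parsing that response; the sample payload is illustrative.

```python
import json

# Illustrative sample of Ollama's GET /api/tags response body, which
# lists locally pulled models under the "models" key.
SAMPLE_TAGS_RESPONSE = json.dumps({
    "models": [
        {"name": "llama3.2:latest", "size": 2019393189},
        {"name": "mistral:7b", "size": 4109865159},
    ]
})

def local_model_names(tags_json):
    """Extract the pullable model names from an /api/tags payload."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

print(local_model_names(SAMPLE_TAGS_RESPONSE))
```

In a live setup the JSON would come from an HTTP GET against localhost:11434 while the Ollama server is running.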
Justin
Apr 28, 2026
App version: 6.0
I want to like this app, but it is just unstable. Admittedly I am using an Intel MacBook, so that might be part of it, but I find that it keeps timing out on requests even though the AI models themselves all state that they received them. I do not have this issue with things like the CLI interfaces or native apps. I am sure it’s just a bug and teething issues, but it is still most annoying.
Daniel Bergquist
Apr 21, 2026
App version: 5.2
Great in concept, but I keep running into bugs: * Takes a long time to time out if it can’t connect, ruining flow. I’ve had issues with and without unused tokens. * I’ve had one conversation running (call it A), created another one (call it B), tried to return to A, but the app will only show me B’s content. * It timed out/errored on Chair’s Synthesis and offered me a button reading “Generate Chair’s Report”, but the button is unresponsive. The only way I have been able to make forward progress is by copying context, deleting all conversations, and then pasting the context into a new conversation. I get one full prompt → Chair → Council → Chair → output cycle before it starts bugging out. This of course burns through more tokens. Would also love to see LM Studio support.
Developer Response
Apr 21, 2026
Thanks - all three bugs are fixed in 5.3 (awaiting review). Connection timeout: dropped per-request limits from 120 - 180s to 60 - 90s. Unreachable endpoints now fail fast so you keep flow. Conversation A showing B’s content: the orchestrator was holding stale results across conversations. It now resets on every switch, so A always shows A. “Generate Chair’s Report” unresponsive: errors from that retry path were silently swallowed. They now surface as a visible banner with the specific failure, and the cross-run state leak that caused the “one cycle then bugs out” pattern is gone. LM Studio: already supported. Start LM Studio’s local server, then in Settings → Providers pick LM Studio (Local). Your loaded models appear in the council picker, no API key needed. Thanks for the thorough report - it was genuinely useful.
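The fail-fast-with-retry behaviour this reply describes can be sketched as follows; the timeout value and helper names are illustrative, not the app's actual code.

```python
class TransientError(Exception):
    """A recoverable failure, e.g. a dropped connection."""

def call_with_retry(send, timeout_s=60, retries=1):
    """Run send(timeout_s) once, retrying on transient errors.
    A tight timeout plus a bounded retry means unreachable endpoints
    surface quickly instead of stalling the flow for minutes."""
    last_error = None
    for _ in range(retries + 1):
        try:
            return send(timeout_s)
        except TransientError as e:
            last_error = e
    raise last_error

attempts = []
def flaky(timeout_s):
    # Simulated endpoint: fails once, then succeeds.
    attempts.append(timeout_s)
    if len(attempts) == 1:
        raise TransientError("connection dropped")
    return "ok"

result = call_with_retry(flaky)
print(result)  # "ok" on the second attempt
```

Permanent failures (bad key, unknown model) should still raise immediately rather than being retried; only transient network errors go through this path.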
Tihomir
Apr 17, 2026
App version: 5.1
This is a very rough first version, to be honest. The markdown is not displayed very well, more specifically the tables. The UI feels jumpy in some places.
Developer Response
Apr 21, 2026
Fair feedback - both issues were real, and both are fixed in recent updates. Markdown tables: the table parser was rebuilt in 5.2. It now handles empty cells, inline formatting (bold, italic, code, links) inside cells, and column alignment syntax. UI jumping: two causes. The chat view re-parsed markdown on every update and used unstable list IDs, which caused flicker and scroll jumps - rewritten in 5.2 with native SwiftUI markdown and stable message identifiers. A conversation-switching fix in 5.3 (awaiting review) finishes the job on the remaining jumpiness around stage transitions. Please update and let us know how it feels. Thanks!