Home   Archive   Permalink



Nick: Anvil for Enterprise Database Development?

Hi Nick!    
    
You recently (Sept 24) said: "For now, I'm extremely happy with the alternative tools I've found - things are nowhere near as bad as they were even 5 years ago, and extremely smart and insightful people continue to make things like Anvil and SQLPage, so I continue to love doing development work. I can't wait to see how AI changes the world - I'm focusing on that, much more than programming language development, because its potential to change our world is staggering, and I want to take part in that as much as possible."

    
I want to write and integrate enterprise applications to improve the efficiency of a commercial bakery, with several thousand retail stores as customers. The business currently uses an antiquated custom C# application for orders and AR, and otherwise runs mostly on an amalgam of Excel spreadsheets, Quickbooks and various 3rd party applications. The C# application has already been substantially rewritten, staying in C#, but switching to Linux, Postgres, and the Avalonia GUI.
    
I want to improve efficiency and data integrity in various administrative tasks, but also warehousing, payroll, etc., by personally leading the writing of additional functions for in-house applications. I haven’t done any substantial programming in decades, but I have been watching Python, and I’m thinking of writing these various ancillary “glue” applications in Python, integrating, as necessary, with additional RDBMS tables. This would be a bit of a challenge for me, but I am confident I can, at least, lead new software development and, ideally, get my head back into programming as a personal challenge/upgrade.
    
I gather you think “ANVIL” is a solid enough way to go for something like this and will save a lot of time while delivering a solid base for the work? Can Anvil work with Postgres as the database? What GUI does it use? What editor and other tools do you think I should look at? Have you used any “AI” assistants specifically with Anvil? Any other suggestions you might have?
    
I’m reading a lot about how AI-code generators are in process of changing the whole field of application development, with programmers becoming increasingly irrelevant for code writing per se, but still being very much needed in a new, higher-level, design and orchestration role. What do you think? You seem pretty keen about AI’s impact on coding. Is ANVIL still a great foundation layer, with the code in Python? Or do you have anything else to recommend I look at?    
    
(FWIW, I know I don’t want Racket — it turns into spaghetti, to my eyes, much too quickly. I am similarly not that keen on working personally with C#, but think Python is likely the right, EASILY READABLE, and widely adopted glue language for these ancillary applications, some of which need to tie to the internet a bit.)
    
Thanks for any advice you might have! Years ago I did a lot of Unix work, tying together various application functions via shell. I am thinking Python is the best choice as a modern scripting language, especially given its readability and #1 library and community support.

posted by:   stever     27-Dec-2024/22:37:59-8:00



Hi Steve,
    
Anvil is still my favorite Python web application framework, and it's stood up well in some significant legacy software migration projects over the past 2 years. It's also well suited to AI code generation - as is any Python system. For example, GPT can work with virtually any well-known Python library out of the box. I've used it extensively to help generate production SQLAlchemy schema and query code, and it's absolutely fantastic at working with database code using that library. If you're not familiar, SQLAlchemy is the most popular database ORM for Python - it can connect to most common databases, including Postgres.
    
I imagine there are a number of branching decisions about the best tools to use, depending on your data pipeline. For example, do you want to continue to import data from spreadsheets, or do you want to create an integrated UI in your application to eliminate the need for spreadsheets altogether (or some mix of those 2 options)? Either way, Python can handle any of those possibilities - for example, the Pandas library is most often used to import .xlsx spreadsheets, CSV files, etc.
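Pandas (`read_excel`/`read_csv`) is the usual tool for that, but for plain CSV even the standard library is enough. A dependency-free sketch of the import idea (the columns and data are invented for illustration):

```python
# Reading order rows from CSV with only the standard library.
# pandas is the usual choice (and also covers .xlsx); this just
# shows the basic shape of a spreadsheet-import step.
import csv
import io

raw = """store,item,qty
12,sourdough,40
12,baguette,25
7,sourdough,15
"""

totals = {}  # total quantity per item across all stores
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["item"]] = totals.get(row["item"], 0) + int(row["qty"])
```

In practice you'd read from a file path and write the rows into the database rather than a dict, but the parse-validate-aggregate loop is the same.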
    
Anvil can be used as a framework to handle the front end UI work, the database ORM, the IDE, Git repository management, authentication system, REST APIs, HTTPS termination, etc. - or you can break out any of that functionality and use other libraries and tools, or pure SQL, for example, to interact with any RDBMS. You could migrate your existing data to use Anvil's integrated Postgres database and ORM, or you could integrate any existing database with SQLAlchemy, or with pure SQL code. You could also choose to build your own authentication system from scratch, use your favorite reverse proxy for HTTPS termination, set up your own Git workflow, integrate third party UI systems, use FastAPI for REST APIs, etc. And you can do any or all of that using other Python tools (Flask, Django, etc.).
    
You could also choose to use SQLPage to build UI and REST APIs, with pure SQL to interact with the database - and call Python (and/or any other language) code to manipulate the results of any query, REST API call, etc. SQLPage is a much simpler system, with more limitations, but it can be integrated directly with any other web development system.
    
If you'd like any help looking through your current legacy system, you can reach me at nick@com-pute.com or call 215-630-6759. I'm happy to take a look and suggest some potential migration and legacy modernization options.

posted by:   Nick     28-Dec-2024/1:00:46-8:00



I just took a look at Avalonia - it appears to be a fantastic and powerful UI framework, clearly aimed at desktop and mobile development, with support for WebAssembly. Anvil is a web framework with a focus on productivity and really pleasant integration of the entire development toolchain (IDE, project management, UI, database, hosting, etc.) - although each of those integrations is optional (most of the projects I do don't use anvil.works hosting - they're typically hosted in data centers managed by the organization's IT department, DBA team, security team, etc.).
    
I'm curious why you want to migrate away from the C# ecosystem, particularly since the existing application has been substantially rewritten with a modern UI framework, you're using a modern RDBMS like Postgres, you already have it all implemented on Linux, etc. Most of your data integrity issues will likely be handled at the SQL level: well-defined schema definitions, foreign keys, constraints, triggers, cascade deletes, logic to handle edge cases, etc. You can certainly do that with Python ORM tools such as SQLAlchemy, but there are popular and capable ORMs in the MS ecosystem too. I'm happy to show you the Python options, but there are counterparts to all those tools and libraries in C#, and I expect GPT and other AI code generators can do a great job with C# too.
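As a small illustration of that SQL-level enforcement, here's a sketch using Python's built-in sqlite3 module as a stand-in (the same DDL concepts - CHECK constraints, foreign keys, cascade deletes - carry over to Postgres; the table names are invented):

```python
# Data integrity enforced in the schema itself: a CHECK constraint
# and a foreign key with cascade delete. sqlite3 is a stand-in here;
# the same DDL ideas apply directly in Postgres.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # SQLite requires this opt-in
db.executescript("""
    CREATE TABLE stores (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        store_id INTEGER NOT NULL REFERENCES stores(id) ON DELETE CASCADE,
        qty INTEGER NOT NULL CHECK (qty > 0)
    );
""")
db.execute("INSERT INTO stores (id, name) VALUES (1, 'Main St')")
db.execute("INSERT INTO orders (store_id, qty) VALUES (1, 12)")

# A zero quantity violates the CHECK constraint at the database level,
# regardless of what the application code does:
try:
    db.execute("INSERT INTO orders (store_id, qty) VALUES (1, 0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

# Deleting the store cascades to its orders automatically:
db.execute("DELETE FROM stores WHERE id = 1")
remaining = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

The point is that bad data is rejected by the database no matter which application (C#, Python, or a spreadsheet import) tries to write it.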

posted by:   Nick     28-Dec-2024/1:29:38-8:00



To be clear, if your main focus is to extend all the software you already have in place (all the existing UI and complete workflows), and if you have a large investment in the C# ecosystem tools that are currently implemented, then it may be worth learning how to use Avalonia, a C# ORM, and other tools in the C# world.
    
If your main focus is instead on adding new/separate microservices, writing additional functions (APIs) for use in existing in-house applications, building ancillary “glue” applications, integrating additional RDBMS tables, etc., for administrative tasks and for separate/new warehousing and payroll activities, then any mature development platform should be able to connect with what you've already got in place. You can connect to Postgres with just about any modern development system (even no-code tools can do that - but using them typically leads to vendor lock-in and ongoing expense based upon usage and/or data/transaction metering - not recommended if you have any development chops).
    
If your main goal is to augment the existing database which is being used by the C#/Avalonia application, and/or any web APIs which that application might call (i.e., continued extension of the existing core application and all your other infrastructure is not your first goal), then certainly give Anvil/Python tools, SQLPage, etc. a try.
    
If your knowledge of and comfort level with SQL is mature, and your anticipated UI and integration requirements (i.e., document import, the need to connect with 3rd party systems, etc.) won't be extensive, then SQLPage may be a great way to get started quickly, without having to learn an ORM or all the other machinery that goes into a typical web development framework. That is, if you expect the majority of your work to be spent just integrating with a database, SQLPage may be a nice little system to explore - and you can integrate your SQLPage creations neatly with other web development tools later on.
    
If your focus is on integrating a wide variety of additional tools (not just database, but visualization, analytics, and third party tools) with your existing systems, then Python is likely the best way for you to find the largest number of existing connectivity/integration/data manipulation tools/libraries.

posted by:   Nick     28-Dec-2024/2:30:49-8:00



Every ecosystem has its benefits: C#, PHP, Java, etc. Python is a safe bet for most modern integration work across platforms and domains of work, but C# should be considered if you're going to be integrating heavily with Microsoft ecosystem tools and environments.
    
In your case, the benefit of using C# is that you already have an investment and working production code. Depending upon how valuable that existing investment is, how well it's currently implemented and working for you, how much you want to extend the existing system with the current language tools & frameworks that are in place (Avalonia looks like a particularly nice UI system), and how well those frameworks are suited to enabling whatever extended work you have in mind, you might really consider choosing to stay with C#.
    
The biggest benefits of adding to your existing software with Python tools all have to do with the ubiquity of Python across so many different sorts of integrations. If you want to implement AI functionality, for example, you'll be working with Python. Many generic APIs and SDKs also come with Python example code, or are prepared to be used immediately with Python, simply because Python is so popular and generally easier for people to use than, say, C#. And if you want to do any work with hardware integrations, the hardware can often be expected to have a MicroPython SDK or a Python API interface.
    
No matter what you do in any modern system, you're likely going to be dealing with an RDBMS (and you mentioned that you're currently relying on Postgres), so having a good handle on SQL is essential - not just for queries, but for schema management, database administration, etc. One of the biggest troubles with most modern frameworks, in my opinion, is connecting the development language with SQL tooling. Most modern frameworks use ORMs to 'simplify' those SQL-to-programming-language integrations, but of course ORMs add more layers of tooling, and there almost always ends up being some need to work with pure SQL - or to add even more tooling, such as migration systems (Alembic, for example), to handle the end-to-end development workflow. This is a complexity issue with virtually every language framework, and that's what's so interesting about SQLPage - there's no ORM. The idea is to build everything around pure SQL, and to add a light UI/API dialect layer directly into SQL code, which tremendously simplifies connecting UI and REST API functionality - and for many data management projects, that's all that's needed. The issue with SQLPage is that doing everything in SQL is typically much more verbose, and sometimes more challenging, than with 'programming languages' - even basic logic requires complex CTEs (which for many DB professionals are as natural as breathing, so it's not a problem for them...).
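For a taste of that "logic in pure SQL" style, here's a small CTE run through Python's built-in sqlite3 module (the data is invented for illustration; the same SQL runs on Postgres):

```python
# Basic logic expressed as a CTE: rank stores by total order quantity,
# entirely inside the database. sqlite3 stands in for a full RDBMS.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (store TEXT, qty INTEGER);
    INSERT INTO orders VALUES
        ('Main St', 40), ('Main St', 25), ('Elm Ave', 15);
""")

top = db.execute("""
    WITH totals AS (
        SELECT store, SUM(qty) AS total
        FROM orders
        GROUP BY store
    )
    SELECT store, total FROM totals ORDER BY total DESC LIMIT 1
""").fetchone()
```

The WITH clause names an intermediate result set, so the outer query reads almost like prose - that's the style SQLPage builds everything around.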
    
Anvil has been particularly useful for me because it satisfies such a wide range of integration capabilities. It absolutely rocks when building applications from scratch, using the built-in database, ORM, UI and API building tools, IDE, auth, project management toolchain, etc. The ability to work with absolutely every piece of a project entirely within an integrated web-based environment, without having to install any software on development machines, and all with simple Python data structures and objects that can be passed between front end, back end, database, etc. - all *without any serialization effort* anywhere (no serializing JSON data structures between front end JS and back end data structures, and then converting between SQL calls, etc.), and with deeply integrated autocomplete across the full stack - altogether makes for tremendously productive, fast and enjoyable workflows. The more you break out pieces of the full Anvil system, the less slick it becomes, but I've still had great experiences working with Anvil, even when the database system, ORM, auth, HTTPS termination, and other pieces are all moved out of Anvil tooling. If you choose not to use the integrated Anvil database and ORM, for example, then you're simply in normal Python world, with a massive set of choices to integrate with whatever database and ORM you choose (SQLAlchemy is the tool most often chosen to do database work in the Python world, but there are many other ORMs and database tools). And you can combine those options. I'm working on a project right now for a client whose business is at-home physical rehab. They want to integrate/automate their main business software with their insurance authorization system, and the business software provider sells an MSSQL product which they can use in-house to enable automations and additional report/analytics capabilities which are appealing and valuable to my client.
So in this case, I'm using SQLAlchemy to connect with their in-house MSSQL database, and the Anvil native database system for all application-level data management (auth, user roles, logging, etc.). That sort of flexibility is what's put Anvil in the sweet spot for me. You can use Anvil-only tooling, and even the built-in hosting at anvil.works, to build quick applications in a few minutes - and then move progressively towards more complex requirements such as hosting within any environment, connecting with any RDBMS, using Python's massive library ecosystem to support virtually every domain of work, etc.
    
I hope that makes sense.

posted by:   Nick     28-Dec-2024/12:12:50-8:00



Whatever tools you choose to use, generative AI is going to make the development process dramatically simpler and more productive. We have thoroughly tested production SQL schema of 1000+ lines, and many SQLAlchemy queries of 600+ lines, which have been generated, continuously updated, and optimized entirely by GPT. Issues with data integrity, and so much of the work of choosing libraries, writing, debugging, optimizing, integrating and improving code, are just handled immediately by AI code generation. Doing the grinding work of developing software is an entirely different world than it was even a year ago. You're going to be absolutely staggered by how smart AI is at understanding goals, and creating working code to achieve those goals.

posted by:   Nick     28-Dec-2024/13:38:16-8:00



Note that Python is much slower than C#. If the applications have performance sensitive parts, Python may not cut it.
    
Programmers are becoming less relevant for writing code, but they remain very relevant for evaluating and fixing code written by AI. It also can't be done wholesale. The process is to have AI write a specific piece of code, then evaluate it manually, then tell the AI what needs to change, and so forth, building up an application until it is complete.

posted by:   Kaj     28-Dec-2024/13:45:41-8:00



Nick, thanks for your guidance and suggestions. Also thanks for the contact information, which I will very likely make use of -- but promise not to abuse.
    
Nick said: "In your case, the benefit of using C# is that you already have an investment and working production code. Depending upon how valuable that existing investment is, how well it's currently implemented and working for you, how much you want to extend the existing system with the current language tools & frameworks that are in place (Avalonia looks like a particularly nice UI system), and how well those frameworks are suited to enabling whatever extended work you have in mind, you might really consider choosing to stay with C#."
    
There is a lot of wisdom in what you are advising in sticking with C#, or at least leading with it; in truth I can't escape it entirely anyway, because I was never proposing to throw away the C# code that's already working under Linux. On the other hand, I wasn't the author of that rewritten code (an employee at the bakery did the heavy lifting, and turned it into a complete rewrite and clean-up back to first principles).
    
To some degree I am confusing personal preference with what makes the most sense right now. I've been tinkering with both C# and Python for at least ten years. C#, obviously, looks a lot like C, which I used to use, and Turbo Pascal, which I also used to use, and like, a long time ago. In truth, my memories of C coding are mixed.
    
I've despised "Windows" for decades, yet think fondly of what I could do with Unix even before Windows and long before Linux took off.
    
Until Avalonia, there really wasn't any good way to keep C# yet escape Windows. But Avalonia has now been under development for approximately as long as "Red." Unlike Red, however, Avalonia has finally escaped from being speculative to being pretty solid. JetBrains incorporating Avalonia support into, and arguably as the foundation for, https://www.jetbrains.com/lp/rider-avalonia/ marked a fundamental development in C# (prior to that, C# was available on other environments, but with poor Postgres support and basically non-existent GUI and IDE support). So I think Avalonia is proven enough, now, that we can finally say C# has graduated into an OS-independent coding language.
    
A little more of the backstory is probably in order. I stopped programming in the early 1990s. That happened pretty decisively and, in the aftermath, I found I had little to no motivation to go back to programming, even as a hobbyist.
    
But circumstances have changed enough in the last few years that I think I am ready to dive pretty deep, once again. I was thinking of studying to become a later life "engineer" and it occurred to me that (1) AI coding assistants were looking pretty cool and -- based upon demonstrations I'd seen -- going back to programming, now, might actually be pretty exciting and (2) Programming is often referred to as "software engineering," so stepping back into programming might provide a useful, and motivating, first step cum segue!
    
So that is more of the backstory. And other events in my life seemed to be strongly nudging me to step into the development breach to make sure the C# rewrite actually got the priority it needed to transition to Linux before the old Windows 7 (yeah, you read that right) LAN system hardware finally expired. Sticking with Microsoft SQL and getting sucked into the impossible-to-escape and now-insanely-expensive Microsoft "upgrade" trap simply isn't something I could forgive myself for.
    
;-)
    
    
So, it is kind of a confluence of events, in the aftermath of all the recent world-shaking changes, that led me to post here to you. And, yes, sadly, I'd been following Rebol -- then Red -- for a long time as well. I kept hoping it really would break through to a glorious and reliable 1.0 release around 2020. So, that is how I remembered your helpfulness and this forum!
    
So, the C# with Avalonia will be the core of the system, and that is the first and most-important piece the bakery needs. But I thought Python with AI code completion would be the easier and more attractive path for me, personally, to get back into software development immediately...and provide that kick-start into "engineering". Python just seemed like a much more attractive, easier on-ramp to SQL and programming for me in 2025. And most of the applications are pretty substantially distinct from the larger program at the heart of customer support and accounting.    
    
The necessity to now jump back into the water, boldly, even as all the other pieces rather suddenly and semi-miraculously seemed to come together, and even as my fear of getting back into programming was dispelled, seemed auspicious. And I thought of your earlier post about finding Red alternatives. And here we are. ;-)
    
Glorious New Year to all sentient beings of good will!
    
-steve

posted by:   good advice!     28-Dec-2024/15:42:25-8:00



Kaj's note that Python is much slower than C# is a legitimate point, and was actually a pain point in one project this year. We had to convert one deduplication process from Python to pure SQL, because the Python code needed to run on a small VM with only 1 core doing the heavy lifting of 630 trillion computations, and Python couldn't even begin to provide a solution in that environment. The fact is, though, SQL implementations have 50 years of optimizations under their belt, and can often squeeze the best possible performance out of hardware. And the thing with tools like SQLAlchemy is that they're basically just convenience layers over SQL - the Python code isn't doing any of the heavy lifting.
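The details of that project are specific to its environment, but as a generic illustration of pushing deduplication down into SQL, here's the common keep-the-lowest-rowid pattern, run through Python's built-in sqlite3 (the table and data are invented for illustration):

```python
# Deduplication done by the database, not by Python loops: delete every
# row whose rowid is not the smallest in its (name, email) group.
# sqlite3 stands in for a full RDBMS; in Postgres you'd use ctid or a
# ROW_NUMBER() window function instead of rowid.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE contacts (name TEXT, email TEXT);
    INSERT INTO contacts VALUES
        ('Ann', 'ann@example.com'),
        ('Ann', 'ann@example.com'),
        ('Bob', 'bob@example.com');
""")
db.execute("""
    DELETE FROM contacts
    WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM contacts GROUP BY name, email
    )
""")
count = db.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
```

A single set-based statement like this lets the database engine's optimizer do the heavy lifting, instead of shuttling every row through application code.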
    
ORMs basically keep you from having to serialize and convert data structures; they enable you to think in one language, and to take advantage of schema and data integrity features which are built into RDBMSs. Anvil's ORM goes much farther, by integrating data objects which persist between front end, back end, and database. It's hard to convey the facility and productivity that enables if you haven't experienced it in any significant way, especially when paired with deep autocomplete features in an integrated IDE (especially one which can run on any mainstream device and requires no installation :) - but when all of those pieces are put together to orchestrate the work which typically requires SQL code and lots of other messy pieces in the full stack (front end JS AJAX (blech), JSON data structures, etc.), you discover some very powerful synergy which increases development capability.
    
It's the SQL RDBMS which does most of the heavy lifting in a lot of the work I do. Python is just a language interface to it (as well as to the front end components, in the case of Anvil), so the performance of the system doesn't come down to Python's limitations. The Python language layers simply help to connect all the other heavy tooling underneath, in a way that's beautifully straightforward and easy to use. The same is true for systems like CUDA and other tools which involve dramatically heavy computational lifting - they just employ Python as the human language interface. Python isn't being used to perform any of the real computational work in those sorts of systems - it's just providing a neat and approachable control interface.
    
The idea of a great framework like Anvil is that all the toolchain quagmires which previously took so much time and work to coordinate, just melt away, and the power/flexibility of the big tools underneath (RDBMS, web UI, existing libraries to connect with any system, etc.) are exposed in ways which are easy to control and *compose*, simply and productively.
    
I can tell you that the project which needed to be moved from Python (SQLAlchemy) to SQL would not have been a problem if the organizational architecture had been optimized to run the application server with more compute (the DB servers ran on separate hardware with 64 cores, while the Python code ran on low-powered VMs with absolutely tremendous network latency between them). Either way, it was a tiny effort to refactor and move the heavy compute processes into SQL stored procedures on the DB servers, and connect them with Python.
    
I just keep having those same sorts of successful experiences with the Python ecosystem. I've been powering through hundreds of challenges of all sorts, over the past few years, which would have been utter show stoppers, without the tools available in the Python ecosystem, and the code generation help of GPT. Even more than the success we've experienced in all these projects, the work along the way has just continued to be fun and satisfying. There are of course always hateful moments, but they seem to pass so quickly with the tools and workflows I've gotten used to. I've come to take for granted 100-1000x productivity gains, ease of development effort, limitless solution choices/options, etc.
    
So yes, Python is slow if you're trying to build everything from scratch with it as the development language, but that's not how Python is used.

posted by:   Nick     28-Dec-2024/18:07:43-8:00



I don't expect Python being relatively slow would be any problem for the ancillary applications I would be implementing. The only multi-user application I foresee is the main one, already committed to C# and Postgres.
    
Question for you, Kaj, or anyone: What is a good way to get introduced to / learn/ get comfortable with an AI coding assistant? There are so many being offered right now. Also, is there possibly a book or course anyone might recommend? Should I just jump in with ChatGPT 4.0, or what? How did you guys get introduced -- did you just jump in and figure things out as you went, on your own?

posted by:   stever     29-Dec-2024/14:00:28-8:00



ChatGPT is primitive for coding, because it's not specialised in it. An AI IDE should be better. I'm starting to use the Cursor editor, which is currently most popular. If your platform supports it, you can also try the new Zed editor.

posted by:   Kaj     29-Dec-2024/14:14:07-8:00



I've been using GPT to help with real projects for more than 1 1/2 years. I've had it generate hundreds of thousands of challenging lines of code, in many hundreds of projects - tens of thousands of which have made it into production work. The truly great benefit of using it is that you can speak in chat just like you're speaking with a knowledgeable, capable human developer. You can discuss approaches to solving problems, ask questions about which libraries are available, discuss benefits and drawbacks of making particular architecture decisions, and have it generate code to actually complete those processes with a variety of different tools, etc., as you would if you were dealing with a human pair programmer (or imagine it being like a team of graduate students who have deep and broad knowledge of all the well known tools available in an ecosystem, except you can compact their thousands of man-hours of work down to just a few minutes with GPT). In order for it to work in a rock solid way, you just need to give it enough human language explanation about the context of the project that you're working on, just like you would if you were introducing a human developer to your current work. There have been times when I've spent more than a half hour crafting prompts, to make sure that all of the necessary context is included, and it's just staggering how clearly and intuitively the GPT models understand your intentions (it helps that they understand more than just coding!). You can have it perform debugging cycles, just by pasting in error output. It is fantastically capable of integrating pieces of code, simply by explaining your intent.
    
    
Claude can be helpful to keep on hand. It's extremely capable, but it gets gummed up and slows down after even relatively short sessions. GPT-4 and up can handle extremely long conversations without losing context or slowing down at all. In almost every situation, that makes *all the difference*. You need that long, consistent context of understanding to work through development cycles until you have problems fully solved. And GPT's memory works fantastically well - you can come back to pieces of projects later on, and pick up exactly where you left off when you want to make changes to functionality. It remembers exactly what you were working on, and can follow directly along with how you want to make changes.
    
Google Gemini has the largest context window of all the current frontier models - it can handle more than a million tokens at once, which means you can upload an entire code base, and it can keep your entire project in context all at once. I haven't yet needed a context size that large, but it's been really helpful for dealing with massive PDF files and other large documents.
    
Deepseek's new model is absolutely brilliant, and very fast - it approaches the quality of OpenAI's o1 (their current best available reasoning model). Deepseek is also the cheapest to use of all the frontier models, if you want to run it via an API.
    
    
There are tools such as Replit which have the built-in tooling and agency to build and host entire applications from scratch, just by prompting, but they're extremely limited in available tool choices, and from what I've seen they really only excel at creating small prototypes.

posted by:   Nick     29-Dec-2024/16:29:18-8:00



Steve, if you want to get a basic idea of how you can work with GPT to build projects, take a look at this case study:
    
https://com-pute.com/nick/brython-tutorial-and-GPT-case-study.txt

posted by:   Nick     29-Dec-2024/16:33:01-8:00



Also http://rebolforum.com/index.cgi?f=printtopic&topicnumber=30&archiveflag=new
    
And I've covered some more really basic examples of working with GPT on this forum, in my topics about AI coding.

posted by:   Nick     29-Dec-2024/16:35:28-8:00



BTW, take a look at my Christmas animation response to Kaj's post 'Meta in Vintage Computer Christmas Challenge 2024'. That took just a minute with GPT.

posted by:   Nick     29-Dec-2024/18:14:25-8:00



You don't want to upload your entire code base every time, and not to an external party. Cursor and other IDEs use your entire project as context automatically, and Cursor uploads it in RAG form, so it uses up less of the context and your code base is not transferred in recognisable form.

posted by:   Kaj     29-Dec-2024/18:25:01-8:00



You don't have to upload every time, GPT stores the full context of every session. And if you're worried about security, you can cut out third parties entirely and run Deepseek in house.

posted by:   Nick     30-Dec-2024/0:41:41-8:00



Steve,
    
Here's the full ChatGPT session that was used to create the application in the case study I linked above:
    
https://chatgpt.com/share/c395ee46-40db-437e-b692-975652120f99
    
Here's the application:
    
http://server.py-thon.com:5001
    
(username: joe@guitarz.org password: 12341234)
    
The application features include:
    
1) A complete authorization system built from the ground up, with validated email signup and automated forgotten-password handling
2) A UI navigation system with collapsible left hamburger menus which show/hide responsively, according to screen size
3) An account management page which enables users to edit their own first name, last name, email address, and password (once logged in by the auth system)
4) A full-featured datagrid that enables inline editing of data values in a contacts database table (with full create, read, update, delete capability). There's no need to press any 'save' buttons while editing data in the grid - all edits are automatically updated in the database. Users can sort rows of the datagrid by clicking any column headers. Users only see and edit their own saved contact data (only data associated with their account, protected by the auth system).
    
All database schema, back-end logic, front-end logic, and every single bit of code was created from the ground up entirely by prompting ChatGPT-4o, and all debugging was performed completely by simply pasting error messages and questions into GPT. No part of this application was created by manual human coding, and every bit of functionality and styling exists *exactly as specified in chat prompts.
    
Here's the source code which was generated from the full conversation linked above:
    
https://com-pute.com/nick/contact_app_no_docs.zip
    
    
That little demonstration case study example, performed during a single impromptu sitting on a Saturday morning, is trivial compared to the tens of thousands of lines of far more significant work GPT has completed for me over the past year and a half - it's really just an introductory glimpse at what's currently possible (and GPT's capabilities have *dramatically improved since that example).

posted by:   Nick     30-Dec-2024/1:35:15-8:00



Here's some deep background on Cursor:
    
https://youtu.be/oFfVt3S51T4?feature=shared

posted by:   Kaj     30-Dec-2024/7:49:08-8:00



I'll look forward to watching all that :)
    
I used Cursor a bit and found it didn't work well for the sorts of work I was focused on at the time. For example, providing documentation for large REST APIs, asking questions about functions in the API, and iterating through changes in functions by providing response data to requests. That just works beautifully in an interactive chat environment. Also, debugging web applications where most of the interactions, errors, etc., are in the browser F12 tools, and for example uploading images (as I did with the little Christmas present example yesterday), all works fantastically with GPT's interactive interface.
    
I'll take another look at Cursor and see if support for those sorts of workflows has improved. Thanks for the nudge!

posted by:   Nick     30-Dec-2024/17:10:44-8:00



There are also a lot of collaborative situations I get into, where GPT's flexible interface is critically useful. For example, someone may pass me a SQL schema, and we may need to decide whether to use SQLAlchemy to perform part of a project in the application code base, or to use a stored procedure in the RDBMS. In that sort of situation, we don't have a shared development environment, no shared editor, etc. - often I'm working on big projects where different teams all connect to the same database, and for example, a Data & Analytics team member may be building SQL views, while I'm building application CRUD UI to deal with the same tables. We can all work collaboratively, generating code on the fly in our own environments, sharing relevant bits of code and having GPT interact with ad-hoc functions, all without having to set up a structured editor environment just to get collaborative one-off tasks completed. And in a large project, it's often the collection of those one-off tasks which form linchpin pieces of the complete system.
    
In the end, it's the strength of the model's ability to understand context, understand goals, and work quickly and reliably at producing not just working code, but structural design in a project, and also debugging intelligently, morphing and integrating previous versions, etc., as requirements change and evolve. GPT's ability to just continually pound away at changing situations in a project, and to keep the entire history of conversations in memory, while discussing and evaluating solution options, has been at the core of how it really helps. For example, Claude Sonnet 3.5 is definitely 'smarter' than GPT4o at writing effective little bits of tough code, but I'm leaps and bounds more productive with GPT because it works so quickly, and it deals so effectively in the long term with how my development cycles evolve, all in a single context which can grow to be many, many times larger than any conversation I could ever have with Claude. It's felt like an industrial-strength work-horse tool, where Claude has ended up being more of a show-horse in big projects. Claude has pulled off some amazing and useful bits, but I don't end up cranking away at normal work with it.
    
My first experience with Cursor was that it performed code 'generation' tasks well, but it didn't help with extended, evolving development cycles in bigger projects - more of the larger context engineering decisions which come from considering, examining, and implementing multiple choices, taking the best solutions and integrating them all. The free-form conversational platform and strengths of GPT have been best so far at all of that.
    
And finally, working with multiple models has been fantastically helpful in many cases when one model gets stuck. That's happening less and less as the models all get more capable, but the workflow of having one model inspect, correct, debug, suggest alternative solutions to the work another model has done, has been a really useful technique. Again, just having a freeform interface, where I can paste full conversations from one model and engage another model to take part, is very flexible to work with, and has been a game changer in a few cases of dealing with complex difficult problems. I expect Gemini's enormous context is going to be very helpful at some point, when working with that approach.
    
So, there's not one solution right now. Just use the best tools for the job. I'm sure for many people Cursor will do everything they want and need. The only way to know is to do your own work and figure out how to be as successful and productive as possible - always keeping an open mind, because everything in the AI space is changing so ridiculously quickly.

posted by:   Nick     30-Dec-2024/17:47:15-8:00



In Cursor and many other IDEs, you can quickly switch models. With ChatGPT, you just have the one model.
    
There is also a chat mode, which is the only mode in ChatGPT. In the IDEs, there are specific AI helpers for specific tasks.
    
The IDEs are for collaborating on code between humans and AI. It seems to me that when you use the ChatGPT model for coding, you put all your bets on trying not to touch or even look at the code, because it makes it hard to continuously integrate manual and AI changes to the code.

posted by:   Kaj     30-Dec-2024/18:20:29-8:00



One of the other major purposes of GPT, for me, is solving problems other than 'coding'. I end up doing a lot of installations, making choices about OS, IT, and other software development-adjacent requirements. I regularly use GPT to install and configure environments. Last night I had to install MS SQL Server 2017 (only that particular version was allowed for this project), on Ubuntu 20.04. We ended up doing a Docker install, which I wouldn't have chosen to do without GPT's input.
    
    
Very often, installing environment requirements is a long and convoluted process, and GPT always has all the answers ready to go. I interact with it like a capable person who has virtually unlimited breadth and depth of knowledge, and who has unlimited patience when dealing with issues of any sort. And now with vision and voice, it has actually useful situational awareness. My girlfriend's daughter is dealing with potential problems with hypoglycemia, and a conversation with GPT Voice in the car ride over yielded great, helpful guidance - I wouldn't have had time except in that car ride. I find myself using GPT Voice more and more, even when walking the dog on cold nights when I don't want to take my hands out of my pockets. And a few of my clients are using GPT Vision to help them understand how to perform computing tasks - as if they have a human helper sitting there with them, telling them what to click on, to navigate through activities.
    
I keep subscriptions to any AI services that end up being useful, and not just for programming - Rockfactory uses Suno multiple times every day. GPT has consistently paid off in many ways.
    
Thanks again for sharing your insight with Cursor. I'll try it again!

posted by:   Nick     30-Dec-2024/20:28:36-8:00



BTW Kaj, it might be a good time to try fine tuning Gemini on Meta documentation and examples. They make it super simple:    
    
https://ai.google.dev/gemini-api/docs/model-tuning

posted by:   Nick     30-Dec-2024/20:57:21-8:00



It may also be worth giving Deepseek fine-tuning a shot:
    
https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/README.md
    
Deepseek was actually surprisingly capable in version 2. I'm really looking forward to doing more with version 3. Aside from the capability, Deepseek is especially interesting because they trained it with about 1/11th the compute needed to train other frontier models.

posted by:   Nickfin     30-Dec-2024/21:16:57-8:00



Kaj, I get a kick out of your comment: 'when you use the ChatGPT model for coding, you put all your bets on trying not to touch or even look at the code, because it makes it hard to continuously integrate manual and AI changes to the code'. Oh stop it. The opposite is true. I'm constantly learning from each day's interactions with GPT. Working with generative AI is the same iterative process that software development has always been. GPT just does a huge portion of the necessary tedious work, and even brilliant men like you can learn something from the output it provides, because its capabilities are shaped by the work, experience, and knowledge of millions of people who created the data which it was trained on. Treat it like a colleague who filters all that human experience into malleable chunks of immediately useful capability, and you get to learn from every bit of output which you integrate into your own insights and hard work. It's a great teacher, laser focused at producing solutions immediately relevant to problems at hand. Approaching work with generative AI, with that attitude, is the opposite of 'trying not to touch or even look at the code'.

posted by:   Nick     31-Dec-2024/3:16:55-8:00



Also Kaj, you said 'In Cursor and many other IDEs, you can quickly switch models. With ChatGPT, you just have the one model'. It's in fact the other way around - in Cursor you're stuck using only the AI models they support (only GPT4 and 4o, Claude and Cursor Small). Constraining work to Cursor disables freedom to choose all the *most capable models (GPT-o1, Deepseek3, etc.), the models with the largest contexts (Gemini 1.5, 2, etc.), the ability to run open source models in-house (Qwen, Llama, Deepseek), etc. I've used all those tools in the past week, because they all have their own niche benefits - for example Google AI Studio's context size and super simple fine tuning. I just choose the best tool for the job, without constraint.
    
Another issue with Cursor, for me, is that it needs to be installed locally. I regularly shift between multiple different work environments, and constantly use many different machines. The ability to open a web browser on any device, with any operating system, log in, and pick up work exactly where I left off - even if just switching between machines in different rooms in the same building, has been one of the most convenient, liberating and enjoyable changes in workflow that came with using Anvil. GPT and all the other web based interfaces fit right into that convenience - I just pick right up where I left off, whether it's on Windows, Linux, Android devices, my phone, my girlfriend's iPad, etc. That convenience actually contributes significantly to my ability to get work done, whenever/wherever I have free time, in whatever environment I happen to be in (on a couch, in a hammock, at Rockfactory, in bed before sleeping or right after waking up in the morning, in a park, eating out somewhere, etc.). That makes a world of difference to me and how I work best and most enjoyably in my life. Other people prefer to have a single workstation and different work patterns - to each their own. I'm just sharing the benefits of experiences and tools which have been most effective for me.
    
If Cursor's features end up being more useful than they were the first time I tried it, I'll be eager to use it. Thank you for sharing your experience and insight :)

posted by:   Nick     31-Dec-2024/9:34:29-8:00



That's what I thought. I fully understand that a consultant juggles multiple things and benefits from generic models. But that's not what a programmer does.
    
In my testing, I only got reasonable results from the few best models that were deeply optimised for coding, and were already trained on REBOL. Like the models, I have deeply specialised work, on just a handful of projects, that would be deeply frustrating to do in a browser, or in multiple environments. My work machine is supposed to be deeply optimised, specific and focused.
    
You have a project with multiple source code files, on your local machine, because that's where your toolchain is. How do you get the files into the AI? Then you ask it to make a code change, over multiple files. How do you update your local files? Then the AI fails to produce working code, so you have to make fixes manually in your code editor. How do you get the changes back into the AI?

posted by:   Kaj     31-Dec-2024/11:56:01-8:00



Kaj,
    
Your condescension always impresses (applause 👏). Copying and pasting must of course be inferior to your real 'programmer' methods, but Anvil does enable me to do all my work with an IDE and a complete toolchain that actually runs entirely in the browser. I don't need *any files* on any local machine (of course I do regularly perform automated backups with a single Git fetch, and of course I work offline whenever I want, but I never really want to any more).
    
I'm currently using the web based setup for ongoing production projects with 30,000+ lines of server code (with thousands of lines in some individual modules), and 10,000+ lines of front end logic, along with, for example, 70+ pages of UI layouts in one project, often with multiple datagrids and hundreds of widgets per page, without breaking a sweat. I've completed 8 major contract projects of this general size in the past year, and more than 500 little projects since I started using Anvil (well before using it with any AI code generation).
    
Editing code iteratively takes a moment in my mobile environment, and I click a button to run the application to see my code changes live. Development cycles and code revisions are much faster than with any other environment I've used in 40+ years. I paste debug output into the GPT console when needed.
    
I use GPT to help with particular tasks - it's great at building and optimizing SQL and SQLAlchemy queries, for example. I'll typically start a session by pasting a database schema into the GPT console. I'll often begin working with a bit of a function that I've written, and iteratively expand on its features. For example, perhaps I'll have a small working query template, but then later want to expand the number of fields, using similar join lookups. GPT is great at understanding how to fill out that sort of drudge work, and often those queries get built out to many hundreds of lines each, with all sorts of filtering, sorting, conditional evaluations, return data structures constructed, etc., along the way. Anvil has Git version management built in, so I can copy/paste code as needed, and revert changes whenever needed, though I tend to just save a module with commented versions of each function I ever use in production - I find that to be faster at managing multiple changes without having to search through version histories. Git's awesome, but I tend to use it most for managing the deployment of production code.
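To make that workflow concrete, here's a minimal sketch of the kind of query expansion described above, using Python's stdlib sqlite3 module. All table and column names here are hypothetical, standing in for a schema you'd paste into a GPT session:

```python
import sqlite3

# Hypothetical bakery-style schema, standing in for one pasted into a chat session.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stores (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         store_id INTEGER REFERENCES stores(id),
                         total REAL, placed TEXT);
    INSERT INTO stores VALUES (1, 'Main St');
    INSERT INTO orders VALUES (1, 1, 19.99, '2024-12-30');
""")

# A small working query template...
base = "SELECT o.id, o.total FROM orders o"

# ...later expanded with a join lookup, extra fields, filtering, and sorting -
# the kind of drudge work the post describes handing off to GPT.
expanded = """
    SELECT o.id, o.total, o.placed, s.name AS store_name
    FROM orders o
    JOIN stores s ON s.id = o.store_id
    WHERE o.total > 10
    ORDER BY o.placed
"""
rows = conn.execute(expanded).fetchall()
```

In real sessions those expanded queries can grow to hundreds of lines; the point is that you start from a small working template and iterate.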
    
Or, for example, perhaps I'll have several functions which I want to integrate into one - or vice versa. GPT is fantastic at understanding the meaning and purpose of functions and combining and/or deconstructing functionality into refactored code. That saves so much time. Copying and pasting doesn't take more than a few seconds. I honestly don't want an AI having the agency to interact with my production tools, or to put its hands directly in my production code base, or to make any changes that I'm not fully involved with, just to generate some code - not yet at least - maybe by the end of 2025 :)
    
Or, for example, perhaps a colleague will write a stand-alone Python function which they want to integrate into the server code, perhaps to perform some statistical evaluation which is related to their expertise. Very often, they'll write their code in a local-to-them development environment, to open a local file, for example. GPT, for example, can instantly integrate that code into an Anvil server function which uses a function template I provide, which passes, for example, some UI values and a file the user submits on the front end. I simply work entirely in Anvil, and copy/paste any code I want GPT involved in, into the GPT console. GPT remembers the entire session conversation, better than any of the other AI tools I've used, even if that session was started and stopped many times over the course of months. And of course that has more to do with the openAI interface, than the LLM model (for anyone unfamiliar, LLM models have no session memory).
    
The same thing may be true of integrating SQL functions and stored procedures which colleagues from various departments in an organization want to integrate. I have to do that sort of previously time-consuming, meticulous work, regularly these days. Now that sort of work is all easily automated with the same sort of workflow. No special development machine coordination or tooling required. They just send me code in a Signal conversation, or using a little file uploader I provide, and the whole workflow is the same as how I deal with GPT and the Anvil IDE together - simple cut, paste, and editor operations (helped by fantastic Anvil autocomplete), the same way we've all been working for years.
    
There's hardly ever a situation when I don't adjust the code GPT creates, and often we'll go back and forth through iterations (sometimes days, weeks, months apart) involving changing logic, parameters, output values, and the way all those values are passed back and forth between front end and back end, and how they're integrated with other processes in a system, but that's just normal development work. GPT just makes all that go faster.
    
I don't mind copying and pasting, because I'm never 'trying not to touch or even look at the code' - I'm fully involved with every bit of logic and every bit of the minutia, in any code I ever deliver in production projects.
    
Or, let's say I have a misplaced comma at the end of a line in a module with several thousand other lines. I pop the code into GPT, and it's brilliant at debugging little issues like that which are not necessarily syntax errors caught by the editor. It's also a great time saver when it comes to looking up error codes in debug output. Not only does it provide a clear and helpful summary of the traceback in an instant - it will look at the logic in the referenced lines of code, effortlessly adjust the existing code to handle encountered edge cases, write exception handling, etc.
    
Or say I need to import and parse a document from a third party agency, or output to a specified file format (PDF, CSV, HTML with styling, etc.), or consume data from a 3rd party REST API, or create a REST API from an existing function I've already written, to be consumed by a 3rd party web hook, etc., etc. GPT is great at not only generating code, but suggesting helpful libraries for all those kinds of work. In fact, it makes doing that sort of work so quick to complete, I'll often finish several alternative solutions, just to test which works best and provides the best long-term viability - where in the past there just wouldn't have been enough time and budget to do all that work. I converse with GPT about those choices, writing and editing prototype code back and forth in the console, and then integrate the results in the development codebase (iterate, debug, test, move to production, etc.).
    
Kaj, I'm not sure what you're imagining the challenge is, but my workflow with Anvil and GPT (or choose any other model) is blazing fast, productive, and effortless. It's so fast that I *often perform live coding sessions with clients, while they watch, to add/change multiple features in applications, in a single sitting. I'm never tied to any single workstation, or any environment, any AI model, etc. I can work remotely, on site with clients, etc. Several of my last big projects were with organizations and teams thousands of miles away, in different time zones, etc. For one of those projects, IT and legal restrictions kept me from ever touching a machine on their premises, and I could never interact with PHI in their database. I worked only in my own development environment, which connects to my various VPS hosted systems, of all sorts of Linux and Windows varieties, to match whatever systems my clients use. I had a non-technical helper, which that organization picked, handle every bit of installation, all regular application updates, etc. And I live-coded solutions, with the help of GPT, while she pasted console debug information in a text chat. With that product, I work with users remotely on Zoom regularly, and do live coding whenever we have a chance, wherever I am.
    
All these AI work routines are massive time savers, but I still do regular old development work. At the end of the case study link I posted for Steve, I also posted a link to a version of the application in the case study, which I created with Anvil in 15 minutes (whereas the GPT code generation took several hours of interactions from scratch - still far better results than anyone could have achieved with Rebol 10 years ago). The point of doing that was to show that some (many) things are still just better done with good old human elbow grease, using capable tools and some experience/knowledge. That sort of regular old development work, without any AI involvement, is still a huge part of what I do every day.
    
I'd be happy to show anyone how I work. I do it under pressure every day in real contract projects that affect livelihoods and organizational success, and I'd be happy to compare my workflow to satisfy a complete set of delivered requirements, in any domain I have experience in, against any other workflow anyone else uses. And they don't need to be lightweight or limited requirements. I do my work under the constant scrutiny of security, IT, DBA, legal, and project management teams, as well as for the joy of satisfied end-users who work in extremely demanding environments such as hospitals, and other stressful interactive work environments - and we've all been getting along beautifully. I support all my clients to their full satisfaction, even though I'm typically the sole application developer (which is a funny thing for you to describe as 'consultant' work Kaj ;), although I often have to interact with data analytics teams and other organizational users/groups who build views and functions to interact with a shared database. I'm doing this sort of work with serious HIPAA, PCI and other compliance requirements, for government, medical, and financial organizations, with extremely demanding user experience requirements, etc.
    
Of course that's not interesting to you, Kaj, because you only do innovation which I couldn't possibly understand - muah :)
    
BTW, I've never seen a model that was 'already trained on REBOL' in any way that was practically useful. See my case study a few months ago in the Calendar CGI thread. It took hours of work, with LOTS of help from me, someone with years of experience using Rebol, to produce horribly inferior output, compared to using fastHTML with a single in-context training file, when fastHTML had been released only a matter of weeks earlier (so no current model was trained on it).

posted by:   Nick     31-Dec-2024/19:24:08-8:00



I don't know why you want to start off the new year with perceiving condescension that isn't there, but whatever. I just asked you some concrete questions about your process.

posted by:   Kaj     31-Dec-2024/20:36:36-8:00



Kaj, potshots such as 'when you use the ChatGPT model for coding, you put all your bets on trying not to touch or even look at the code' clearly suggest an utter disregard for responsible development practices or even basic professionalism. I mean c'mon, that's a terrible thing to suggest about another professional's attitude and ethics, and it is utterly baseless. I clarified my practices a bit more, in response to your questions about my workflow, which I hope are concrete enough to be helpful about the topic of this thread.
    
Or for example, in another of your many previous comments throughout this forum: 'I can clearly see where Anvil still lacks compared to Notes more than thirty years later. I have seen it all before. I want fundamental progress, not the deterioration all around us in our field, due to ignorance and stubborness'. You say dramatically critical things like that, *without ever actually experiencing the perspective which you criticize*. Have you actually used Anvil in any significant production work? Clearly not, because your understanding of what it does, of what it's capable of helping developers achieve in critical production work that needs to be completed *now, or of how it even works at the most basic level, is just plain wrong. Your position about how using Cursor is better than GPT is based on simply misunderstanding how the fundamental workflow operates in Anvil (you think it involves files on a local machine, for example).
    
This topic is 'Nick: Anvil for Enterprise Database Development?', with a particular emphasis on how AI code generation can be integrated. I've tried to provide some insight, links to significant concrete examples, direct answers to questions in the context of Steve's existing tooling, etc., about how database queries can be generated with any AI tool, and integrated into Anvil. Cursor is *not an effective tool in that workflow, because of how it works, at the most basic level.
    
Your constant attitude that '[I do innovation, you do well understood automation]', '[You are a consultant, I am a programmer]', etc., just isn't helpful to anyone. I have no argument with you. I've made it very clear, every time we interact, that I respect what you do, even though you rarely share a single line of useful code or help a person solve a problem at hand on this forum. I do actually want to help support your vision about Meta - I even offered some useful links about getting Gemini and Deepseek trained on Meta, while it was pertinent to the discussion here, etc. My previous actual efforts to help you test Meta deployment were derailed by your arguments about using Anvil, Python, etc. in the process. Please remember that I actually did follow through, compiled examples, and wrote and integrated Meta code, when you provided any concrete requests. I also did meaningful work early on, getting GPT to learn Meta in context, to which you responded 'Nice GPT sessions on Meta. Now that is a start! I didn't see it make any mistakes. It's also the first time I see ChatGPT say anything about Meta that isn't utter nonsense'. I've done everything I know to do to be supportive of your vision, because I respect your work.
    
I do appreciate your constant pushback about everything I say in every topic - helpful criticism is always productive - but jeez dude, maybe just be a little more aware of the constant cocky snide remarks, especially those which publicly call into question my work ethic, experience, and professionalism, and maybe focus instead on answering questions and providing useful help and substantive content about the topic.

posted by:   Nick     2-Jan-2025/9:45:39-8:00



Kaj,
    
I do understand that it upsets you to hear me talk about my preference for any ecosystem other than Rebol, on this forum. I understand that you'd like this forum to remain a place of positive support specifically for Rebol related topics only. I've made a particular effort to keep my discussions about Anvil, SQLPage, Python, SQL, AI tools associated with integrations in those platforms, etc., limited to a few topics which I started to compare my experiences using those tools, to my experience using Rebol. Others have since started topics about using those other tools, in which I've responded.
    
The traffic at this site is virtually non-existent, compared to the hundreds of thousands of visitors it attracted many years ago. Rebol was abandoned by its creator and its community for practical reasons: a no-longer-supported, closed-source product which couldn't even connect to network services over HTTPS was not useful in any production environment.
    
I started writing about Anvil and AI in a couple dedicated threads here, because I expected that some abandoned Rebol users might find my experiences helpful. And the little bit of feedback I got from random passers-by and a few old friends, just kept me engaged, and those threads became a place for me to document more of my experiences, because they happened to be actually useful for a few people.
    
In the past, I spent many hundreds of hours actually helping people solve specific problems with Rebol, on this forum. I wrote several thousands of pages of tutorials and code examples, because they were useful to people, and they helped support a community, which gave back - and a thriving community helped me in return. Now I've done the same thing with Anvil, Streamlit, SQLPage, Brython (Brython hardly, just one little tutorial), etc. And you've heard over and over about how successful Anvil and the Python ecosystem (with some JS, CSS, etc. mixed in), have been at completing an enormous volume of production work, which far outweighs anything I ever achieved with years of experience using Rebol. I think that comparison is valuable to anyone who happens to discover Rebol, only to find out that it's currently dead for virtually every useful purpose. And it's turned out to be the case that it has been useful to a number of people, so I'll enjoy continuing to share, when it's helpful.
    
If and when Meta is useful in a way that I can help support, I'll look forward to spreading the word, sharing experiences, etc. about it. I'm open to every single tool that helps achieve the same sorts of goals which Rebol was successful at achieving. I've said it in so many ways, Rebol is still dear to my heart, its ethos and implementation are beautiful, and the industry can learn a lot from Carl's vision, and what he actually implemented and achieved with Rebol. But despite my appreciation of Rebol as a language and perhaps a work of technical art, I used Rebol as a practical tool which helped me accomplish real goals in the non-technical world, and I'm interested in sharing how other tools have succeeded where Rebol failed (yes, largely as a result of market forces, more than technical capability). I think that's a relevant topic of interest on this forum, especially given the vacated state of traffic here, and especially for a few limited threads. It doesn't mean that I don't support you and your vision. My sense is that my current perspective conjures some visceral disdain in your world view. I would like for you to consider me, at very least, a casually interested and hopeful supporter, instead. We do share an appreciation of Rebol, at some level, and I do appreciate the challenge you've taken on with creating Meta. I respect your work, and I don't have any argument with you or your goals. I do always appreciate your critical feedback and your point of view. Please just try to be aware that many of your comments exude disrespect and contempt for my experience, which although different from your specialized work, does hold value for others reading here.

posted by:   Nick     2-Jan-2025/10:35:24-8:00



Nick, there are no potshots, except in your imagination. I responded to Steve's questions. I already decided earlier not to engage you anymore, but obviously, that doesn't work. I will now not respond to you anymore at all. I have no place in my life for your personal attacks and dismissive attitude.
    
Steve, sorry we can't respond to your inquiries without colliding.

posted by:   Kaj     2-Jan-2025/14:58:45-8:00



'I will now not respond to you anymore at all'
    
Awesome, thanks :) I don't come to your Meta forum and argue every point you make - it would be fantastic if you finally stopped trying to discredit everything I say on this little forum, which doesn't fit your Meta marketing agenda.

posted by:   Nick     3-Jan-2025/7:08:31-8:00



Steve,
    
The volume of content and discussion here is likely too much to digest immediately. The easiest way for you to begin actually working toward some meaningful progress is to gather a bit of SQL schema from your existing application, or just create some basic example schema with a few tables and foreign keys. Paste the schema into your choice of AI tool and ask it to create some queries which return the results you're interested in, from the tables, joins, etc., in that schema.
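To make that concrete, here's a minimal sketch of that schema-plus-query round trip, using Python's built-in sqlite3 module. The table and column names are hypothetical - just the kind of thing you might paste into an AI chat and ask questions about:

```python
import sqlite3

# Hypothetical two-table schema: customers and their orders, linked by a foreign key.
schema = """
CREATE TABLE customer (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE customer_order (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    total REAL NOT NULL
);
"""

con = sqlite3.connect(":memory:")
con.executescript(schema)
con.executemany("INSERT INTO customer (id, name) VALUES (?, ?)",
                [(1, "Acme Bakery"), (2, "Corner Store")])
con.executemany("INSERT INTO customer_order (customer_id, total) VALUES (?, ?)",
                [(1, 120.50), (1, 80.00), (2, 35.25)])

# A join-plus-aggregate query of the kind you might ask an AI to write for you.
rows = con.execute("""
    SELECT c.name, SUM(o.total) AS total_sales
    FROM customer c
    JOIN customer_order o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total_sales DESC
""").fetchall()
print(rows)  # [('Acme Bakery', 200.5), ('Corner Store', 35.25)]
```

Once you have something this small working, it's easy to paste the schema and the query into a chat and iterate on filters, sorting, additional joins, etc.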
    
If you're interested in learning to work with that schema in Python, ask your AI for a tutorial about how to convert that SQL schema to SQLAlchemy, and then generate those same queries in SQLAlchemy. You can regularly ask the AI for tutorials, examples, and explanations about anything you don't understand, or even how to get started achieving any particular task, from scratch. Start with simple examples, and ask the AI about how to prompt it, if you're not sure how to word your prompt.
    
Be sure to tell your AI which RDBMS you're using (Postgres, MSSQL, SQLite, etc.), and the AI will be able to tailor queries for that particular flavor of SQL. It's often easier to just connect to SQLite when you're building SQL/SQLAlchemy examples, and then migrate your connection string later to Postgres or whatever RDBMS you'll use in production. If you're using SQLAlchemy, you'll likely need to make few, if any, changes to your code or logic.
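As an illustration of that workflow, here's a minimal SQLAlchemy sketch (the table and column names are hypothetical, and this uses the 1.4/2.x-compatible API). Note that when moving from SQLite to Postgres, only the create_engine URL would change:

```python
from sqlalchemy import (Column, Float, ForeignKey, Integer, String,
                        create_engine, func, select)
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

class Order(Base):
    __tablename__ = "customer_order"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.id"), nullable=False)
    total = Column(Float, nullable=False)

# Develop against in-memory SQLite; later, swap only the URL, e.g.
# create_engine("postgresql+psycopg2://user:pass@host/dbname")
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(id=1, name="Acme Bakery"))
    session.add_all([Order(customer_id=1, total=120.5),
                     Order(customer_id=1, total=80.0)])
    session.commit()

    # Same join-plus-aggregate idea, expressed through the ORM.
    stmt = (select(Customer.name, func.sum(Order.total).label("total_sales"))
            .join(Order, Order.customer_id == Customer.id)
            .group_by(Customer.name))
    totals = session.execute(stmt).all()

print(totals)  # [('Acme Bakery', 200.5)]
```

This is exactly the kind of conversion ("turn this SQL schema into SQLAlchemy models and rewrite these queries") that the frontier models handle very reliably.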
    
If you're using a chat interface with strong memory features, it will remember such preferences for the full context of the conversation. It will also begin to understand more and more about your logical requirements, as it gains additional context about how you intend to implement each iteration of changes to each query. You'll be able to ask simple prompts such as 'please add a filter to that query which searches the ___ table for values which exist in the ___ lookup table, and sort the results by the ___ argument in the function signature'. In my experience, ChatGPT's interface and models have demonstrated the best of that sort of long-context memory and intuitive understanding of human language instruction, along with high quality output - but at this point in 2025, you'll likely get very good results using any of the frontier models.
    
Working with Anvil simply means running any Python code in an Anvil server module (a Python code file which runs on the server). In the beginning, I would often run and test bits of code on a local machine, but I really never do any of that any more - I just run, test, and integrate everything directly in Anvil. You can put all your imports, schema definitions, functions, etc. in a single code module, and call them from front end functions. You can ask the AI how to pass values from Anvil front end widgets (textbox fields, dropdown selectors, datagrid rows, etc.) to your server functions. All the frontier models seem to be very well versed in Anvil at this point, and GPT has been particularly able to understand the intended interactions, simply from pasting in either front end code which is intended to call unwritten server functions, and/or server templates for which front end code needs to be written. In fact, GPT is typically able to write all the front end and back end code, and explain exactly how to perform everything needed to create the full stack process in Anvil. You'll get answers such as 'add a text area widget named textarea_1 to your layout, add this event handler function to the front end form, and add this server function to your back end module' - once you learn how to get working with it a bit.
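As a rough sketch of that pattern (the widget and function names here are hypothetical, and this only runs inside an Anvil app, so treat it as an illustration rather than a drop-in implementation):

```python
# --- Server module (runs on Anvil's server) ---
import anvil.server

@anvil.server.callable
def save_note(note_text):
    # A real app would write to a Data Table or an external RDBMS here.
    return f"Saved {len(note_text)} characters"

# --- Form (client) code: an event handler wired to a button ---
# def button_save_click(self, **event_args):
#     result = anvil.server.call('save_note', self.text_area_1.text)
#     Notification(result).show()
```

The front end calls the server function by name with anvil.server.call, and the decorated server function does the actual work - that's the whole full-stack round trip the AI will scaffold for you.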
The same is true for using any mainstream language. Ask your AI model to write SQL queries, for example, or to convert existing code from one language to another. Converting existing language examples can be particularly helpful - and if you aren't fluent in the original language, ask your AI model to explain the code line by line.
    
Very importantly, if you're using a tool which your AI model doesn't know anything about, you can typically paste in documentation about the tool, and the AI will be able to reason out answers about the code which needs to be generated, simply by understanding that documentation and the general context it will operate within. For example, SQLPage is new enough to be unknown by most models, but pasting in the documentation for any of the UI components typically enables the AI to generate perfectly functioning SQLPage code. The same is true for REST APIs which are not well known, and all sorts of other code interfaces that the AI hasn't been trained on. You can paste in documentation for virtually any interface, and the frontier models will be able to decipher and understand how to generate code from that documentation, especially if the methodology follows well known practices and code patterns.
    
Treat the AI model like a person: explain everything about the context of the code you're trying to generate, just as if you were explaining it to an intelligent, experienced human developer - with all the same assumptions about how what you're trying to accomplish should likely work, and all the information a human developer would need to complete the task - and you'll likely get solid output. You'll learn very quickly how to interact with the AI you become most accustomed to. The most important thing is to provide enough plain English explanation about the context of the data and information you'll be processing for it to understand your goal. The more clearly you explain and provide the necessary context (using your intuitive human ability to describe what you're trying to accomplish - really flesh it out like you're speaking with a person), the better the AI will succeed at the code generation task. If you're dealing with a chat interface, you can ask for explanations, tutorials, and advice about how to solve problems, what libraries and solutions are likely to fit your use case, etc., along the way, directly in-line in your problem-solving code generation discussion. You can ask the AI to adjust code you've created, and/or code it's created, iteratively, over and over again.
    
One really important hint is to ask questions such as 'please provide the complete updated function, with those suggestions integrated'.
    
Just dive in - you'll find that working with AI makes learning to use new tools dramatically easier compared to reading hundreds of pages of documentation, searching for examples and answers about how libraries work, experimenting with ill-conceived approaches to implementing solutions, etc. You can ask very pointed questions about how to achieve exactly what you want, and get tailored code results for your goal immediately - and you can ask for help understanding how to integrate bits of working code into larger and more complex contexts incrementally, every step along the way.
    
Please don't hesitate if you have any questions about how to do any of this.

posted by:   Nick     3-Jan-2025/8:58:33-8:00



Keep in mind, also, that you're not tied to any particular web framework (Anvil), any particular ORM (SQLAlchemy, and/or the built-in Anvil ORM), or any other tool choice.
    
You can choose to code database interactions with any of the Python pure SQL drivers, or any other Python ORM (Pony, PyDAL, Django ORM, Peewee, SQLObject, Tortoise, etc.).
    
You can also choose to integrate Python server code with any web framework (Flask, FastAPI, Pyramid, Django, Bottle, Microdot, Web2py, Jam.py, etc.).
    
And you can choose to integrate any back end Python web framework with any front end framework or UI methodology you want. Big frameworks like Django simply provide nicely integrated features for all of those, and Anvil includes a visual UI builder, a web based IDE with deeply integrated autocomplete (which you'll find is outstandingly productive if you're using all of Anvil's integrated features), file and project management with Git integration, and even hosting and instant deployment, all built-in, if you choose to use all of Anvil's features.
    
You're never tied down to using any particular Python tool/framework, though, and all the Python web frameworks will allow you to connect to the entire Python ecosystem of libraries, which in turn can connect to virtually any other system, and which enable manipulation of data in virtually any way imaginable. Right out of the box, there's likely a handful of well known Python libraries you can choose from to help achieve virtually any common computing goal.
    
If you want to start with simpler and more commonly used tools than Anvil, I'd strongly suggest Flask (web framework), SQLAlchemy (database ORM), Bootstrap (front end UI layout framework), and jQuery (or just plain JavaScript for DOM manipulation). These are the tools with which I've seen all the frontier AI models generate working code most capably (because there are billions of lines of existing code and documentation already online using those tools, dealing with virtually every possible development task). And those 4 tools together will be able to create virtually any useful business application features you can imagine. If you need to add general graphics capabilities, or even to create basic games, plan to use Canvas in the browser when generating code with AI. Flask and SQLAlchemy can be installed instantly in virtually any server environment with virtually any common version of Python (even very old machines running Python 2.7), and various versions of Bootstrap, jQuery (or plain JS), and Canvas work in virtually every browser available (older versions run in even ancient versions of virtually every imaginable browser).
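To give a sense of how small that stack can start, here's a minimal sketch of the Flask + Bootstrap pairing. The route, the page content, and the Bootstrap CDN version are my own illustrative choices; a real app would render templates and pull rows through SQLAlchemy rather than hard-coding HTML:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A single page styled by Bootstrap from a CDN; a real app would use
    # render_template() and query its data through SQLAlchemy instead.
    return """<!doctype html>
<html>
<head>
  <link rel="stylesheet"
        href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css">
</head>
<body class="container py-4">
  <h1>Orders</h1>
  <table class="table table-striped">
    <tr><th>Customer</th><th>Total</th></tr>
    <tr><td>Acme Bakery</td><td>200.50</td></tr>
  </table>
</body>
</html>"""

# Run with:  flask --app this_file run
```

From here, adding a form, a jQuery handler, or a SQLAlchemy-backed data grid are all incremental steps that AI models handle extremely well with this particular stack.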
    
I have case studies showing how to create applications of all sorts using these tools. They are as mainstream/supported/accepted/interoperable as you get in virtually any IT environment, and are well known enough that they can be connected to just about any third party APIs and other systems which exist in the modern landscape of computing. Just using Python and an RDBMS on the server provides the majority of that connectivity capability. Using Bootstrap provides most of the nice looking UI features, easy CSS layout defaults and adjustments, etc., and you can build really powerful/customizable data grids with it (which is a core requirement/feature in so much business software functionality). Anvil just has corollaries of all these pieces, plus higher level + visual project management tools, IDE, autocomplete, auth, optional hosting, etc., all built in and integrated into one unified toolset.

posted by:   Nick     3-Jan-2025/9:39:01-8:00



Learning Streamlit with some SQLAlchemy is another way to get started in a simple and straightforward full stack Python ecosystem framework, without having to learn anything at all about web development:
    
https://streamlitpython.com
    
It's not as versatile as any of the other big toolkits we're discussing here, but you can get started building really useful apps with it in a day, and it's a great little tool to have in your tool kit. Streamlit is really popular among the data scientist crowd - those folks who want to build front ends for their models, dashboards, etc., without devoting a lot of life to learning web development. It may be a nice way for you to dive right into creating some very useful ancillary applications and learning about the Python ecosystem, without having to spend weeks/months learning about web development.
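For a sense of scale, a complete little Streamlit app can look like this (a hypothetical sketch - the data and widget choices are mine; save it as app.py and launch it with `streamlit run app.py`):

```python
import pandas as pd
import streamlit as st

st.title("Store order totals")  # hypothetical mini-dashboard

df = pd.DataFrame({"store": ["Acme Bakery", "Corner Store"],
                   "total": [200.5, 35.25]})

store = st.selectbox("Store", df["store"])
st.metric("Selected store total",
          f"${df.loc[df['store'] == store, 'total'].iloc[0]:.2f}")
st.bar_chart(df.set_index("store"))
```

There's no HTML, CSS, or JavaScript anywhere - the script reruns top to bottom on each interaction, which is exactly why data scientists find it so approachable.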
    
Anvil will give you many more options and capabilities in the long run. You can find my basic tutorial about Anvil at:
    
https://pythonanvil.com
    
(~200 pages)

posted by:   Nick     3-Jan-2025/10:10:46-8:00



Thank you Nick,
thank you Kaj,
    
for both of your efforts to bring computing closer to the people that need to program the machines to do the things people need to do using computing power of those binary based idiots.
    
For me it is clear both of you have your point of view and it will be best to agree that there are issues to disagree on.
    
I am also glad to see that you both are happy to disagree in order to be more productive on your own turf.
    
I would like to see both of you working together again in the (near) future.
    
Let's make the best of a Rebol inspired 2025 and onward...


posted by:   iArnold     3-Jan-2025/13:56:41-8:00



Thank you for your kind sentiment iArnold - true genius can sometimes be hard to get along with ;) Happy new year!

posted by:   Nick     3-Jan-2025/14:14:08-8:00



Steve,
    
If you haven't seen SQLPage, take a look at my little intro (11 tiny applications in 29 Total Lines Of Code):
    
https://learnsqlpage.com/intro.html
    
It's perhaps one of the simplest and most practical ways to build genuinely powerful data management apps, and the benefit is that you'll focus on SQL and database use, which is really the most important part of any modern data management app. It runs on virtually any platform - I've even compiled it to run in Termux on Android - and it's a single tiny file install which provides UI, REST API, auth, and connectivity with Postgres, SQLite, MySQL, MSSQL and other RDBMSs. Even if you just use a few features of SQLPage, you can call Python and/or code in any other language, and it's dead simple to integrate/connect apps built with it into apps built with any other web development tools. Here's my quick start for developers:
    
https://learnsqlpage.com/sqlpage_quickstart.html
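To give a flavor of the model (this is a hypothetical page based on SQLPage's documented component pattern - each SELECT statement renders a UI component, and the `product` table is invented for the example):

```sql
-- index.sql: a complete SQLPage page
SELECT 'list' AS component, 'Products' AS title;
SELECT name AS title,
       'product.sql?id=' || id AS link
FROM product;
```

That's the whole app for that page - the first SELECT picks the component, and the following rows feed it, which is why SQLPage apps come out so tiny.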
    
    
And here's a little Streamlit CRUD tool that may be useful for you to look into:
    
https://example-crud.streamlit.app
    
I haven't used that particular CRUD solution for anything yet, but it fits right in with the Python, SQLAlchemy, Streamlit set of tools, and that's enough to get you creating significantly useful full-stack apps without much ramp-up, and with real productivity possible. The benefit of that CRUD tool is that it's a drop-in full CRUD solution which requires virtually no code to implement, and it will get you using SQLAlchemy, which is the default ORM for working with databases in the Python ecosystem.

posted by:   Nick     4-Jan-2025/12:50:49-8:00



Tutorial and examples for the streamlit_sql package above, created entirely with GPT:
    
https://com-pute.com/nick/streamlit_sql_tutorial.zip
    
https://com-pute.com/nick/streamlit_sql_tutorial.txt
    
https://com-pute.com/nick/streamlit_sql_favorite_foods.py
    
    
Running live at:
    
http://server.appstagehost.com:8501
    
    
The zip download contains a version in which I used GPT to convert all the Italian words to English.

posted by:   Nick     5-Jan-2025/22:53:52-8:00



All these other tools I'm listing are lighter weight alternatives to Anvil. Anvil is the big guns I use to handle most production projects, but it's nice to have some simpler alternatives for quick in-house projects.
    
You may also want to take a look at Jam.py
    
I've been digging more into Jam.py lately because it uses Flask, Bootstrap, jQuery, and any of the common RDBMSs. It's got all the features required to build HIPAA compliant applications (auth, automatic logging, and other commonly required security features), along with a no-code sort of 'builder' interface for creating most of the CRUD UI grid and form layouts & interactions - but you've got full access to all front-end design and logic, as well as back-end API functionality. So it's about as mainstream and transparent as tooling gets, and it provides some really productive visual tools for building database schema, the most common sorts of database queries, UI layouts, etc., which can be a big time saver, without limiting you to only using those tools. Jam.py is lighter and quicker to install than Anvil, and it can run on older and more resource constrained server and client machines with older browsers (the server even runs in Python 2.7).
    
The visual 'builder' is also part of the open source Jam.py package, which is not true of Anvil. Although you can build and/or edit Anvil applications purely with code and the open source anvil-app-server, without using any of the hosted tools at anvil.works, the visual editor, IDE, project management tools, and of course the hosting provided with an account at anvil.works are not available as an open source project - and those make up a big part of why I enjoy using Anvil. So if anvil.works were to go out of business, I wouldn't have any issues managing existing projects or building new ones with Anvil, but I'd certainly miss the conveniences of using the hosted editor and tools. I think jam.py may be a relatively close comparable product for most of the sorts of core work I do, with what appears to be a productive work environment that can be easily installed on any server, and which provides the flexibility to use tools such as Flask, Bootstrap, jQuery, and any common RDBMS - tools which are simple to use, and which AI code generators are extremely well adjusted to working with.
    
I'll likely do my next tutorial on Jam.py...

posted by:   Nick     6-Jan-2025/19:13:15-8:00



There are so many of these sorts of frameworks and tools in the Python ecosystem - each with their own features, benefits, and drawbacks. Anvil has proved itself to be staggeringly productive, largely because of its hosted project management features. I trust it simply because it's risen to fully satisfy every single complex challenge I've thrown at it. It's handled some significantly deep projects of tens of thousands of lines of code each, involving daily development work for more than a year each, some of which involved extremely complex development and production deployment situations. It has satisfied the user experience requirements of the most demanding users imaginable - and it's been absolutely fantastic at helping me complete small projects dramatically quickly, without limits.
    
But Anvil-app-server is a big framework, and the hosted tooling is a big system that I pay to make work much more productive. It's suited for those year+ long development journeys which often involve many people at large organizations.
    
I like that jam.py orchestrates a quick and simple set of the absolute most popular and performant mainstream development tools, with many of the productivity features that Anvil enables. It has its own built-in ORM that the visual builder uses to generate schema and queries, and integrates automatically with built-in data grids (tables), forms, and other stock UI components, so you can do lots of fundamentally useful and powerful CRUD work with it very easily - without even having to write any code, but you can also just as easily integrate any SQL, SQLAlchemy, or any other server code, or extend the front end with any client frameworks and tools that aren't built into the system.
    
What's most important is that you can take any of the tens of thousands of lines of Python server code in Anvil apps, and move them to any of the other frameworks, and vice-versa. And because all the frameworks I use are web frameworks, integrating pieces of apps written in any framework is as simple as connecting to REST APIs and including entire application interfaces in an iframe. We've done that for some simple interfaces in which we connected SQLPage UIs with existing database schema and web APIs, and just inserted the whole SQLPage app in an Anvil page. Doing that sort of thing with *any web framework, in any programming language, is just part of what makes all these frameworks so versatile. Colleagues can work in pure SQL views and stored procedures, or develop some little piece of functionality using any web framework, and it can get integrated into a larger application structure in just a few minutes.

posted by:   Nick     6-Jan-2025/19:58:25-8:00



Jam.py is at https://jam-py.com
    
The stock Jam.py demo is at https://demo.jam-py.com

posted by:   Nick     6-Jan-2025/21:07:51-8:00



https://www.youtube.com/watch?v=u_80ykoy8b0
https://www.youtube.com/watch?v=qkJvGlgoabU
    
You can see some more demo links here (for a forked version of jam.py):
    
https://github.com/jam-py-v5/jam-py?tab=readme-ov-file

posted by:   Nick     8-Jan-2025/7:29:25-8:00



Take a look at jam.py when you get a chance. It's already old, and in the past I utterly discounted its capability and usefulness, because the person who created it was focused 100% on achieving commercial success with the low-code visual 'builder' features of the framework - but it's actually a wide open package of all the most common mainstream web development tools (any Python code & any common RDBMS on the back end, and Bootstrap & jQuery on the front end - it doesn't get more mainstream and AI code-generation friendly than that).
    
Jam.py can be implemented almost anywhere because it runs on virtually any server (not only on any Python 3.x version, but even on old Python 2.7) and the front end can run in basically any client browser (Bootstrap and jQuery are the most universally supported/used front end frameworks, and the ones best supported by AI code generation). And really importantly, the whole framework is *super performant because it relies on established performant tooling as a base, even on resource constrained hardware.
    
The hyped low-code facade which is presented as the whole purpose of jam.py, previously kept me from being too interested in it. I expected it would be a limited tool, like all the other low-code tools, but it's actually an easily *extensible and deeply capable framework which provides many of the benefits of Anvil: web based IDE for the entire full-stack development process, with visual UI and database schema builders, auth system, ORM, API endpoint interface, etc., but with even some additional features that help to satisfy HIPAA and other tough compliance/security requirements more easily, such as automated audit trail logging (that feature alone can be a massive productivity boost, when it's needed).
    
I initially categorized the whole jam.py framework, based on its marketing focus on the visual 'builder' tooling, as just another gimmicky no-code/low-code interface better replaced by some simple pure code frameworks - but the builder is actually pretty brilliantly executed. It's certainly an especially capable no-code/low-code interface for building the most common time-consuming UI components such as grids/tables and forms, with automated full stack CRUD implementation (i.e., with automatic database schema and query generation, including foreign keys, joins, automated filter generation, automatic sorting based on automatically generated UI headers, inline editing (full circle CRUD) in UI grids, automatic form generation with fully automated CRUD functionality, etc.). It's a slick, simple system with surprising capability/features.
    
    
The big thing for me is that jam.py looks like it'll provide all the most important framework and IDE tooling to speed up everything in the full stack development process, with the whole toolchain and all development cycles living entirely in the browser. And unlike Anvil, the IDE is entirely open source and installable in about a minute. With Anvil, although there's nothing which can't be accomplished using the stand-alone anvil-app-server and pure code, it's the commercially hosted Anvil IDE - with visual UI layout, internal ORM schema tools, file/project management, version management, single-file project cloning, deep autocomplete features, etc. - which dramatically boosts productivity. Beyond the commercial IDE being such a big part of what makes Anvil productive (the only real thing I wish I could change), the open source Anvil app server is also heavy: it requires lots of resources, involving not just Python, but Java, Postgres for internal operations, a large server application download (written in Clojure/Java), etc.
    
Jam.py is a *much simpler install, the interface is super simple, and I genuinely think it may actually be superior for UI datagrid development (which is a huge part of most CRUD work), but most important, it exposes direct interaction with server and client code - so you can connect/integrate anything in the Python ecosystem (any library, ORM, framework, SQL, OS interaction, etc. on the server), and customize any HTML/CSS/JS on the front end - all within a really productive framework that has all the built-in features needed to handle fundamental CRUD, navigation, and other constantly required, common development tasks, easily.
    
Jam.py's ORM works immediately out of the box with Sqlite, Postgres, MySQL, MSSQL, Oracle, Firebird, and others, and there's even a tool to import existing MS Access projects. As with Anvil, you can choose to use or not use any of the built-in integrated tools - for example use SQLAlchemy or any other ORM, instead of the built in ORM, and integrate any third party UI components in the HTML/CSS/JS world (or roll your own, however you prefer). Pick any JS grid or charting library, for example, that has any special visualization feature you want to take advantage of, and it'll integrate as easily as it can with any other plain vanilla web development environment. In that case, the jam.py framework can simply work as an IDE with all sorts of needed features such as auth. Add a 3D gaming library and integrate it with the jam.py database system, page layout styling and navigation, and just use the jam.py IDE and framework to deploy the project. That's a big part of what's made Anvil useful to me - development and deployment toolchain features. Jam.py is much simpler, but I think it provides fundamentally useful toolchain features.
    
If you take a look at the examples on the jam.py Github page, you'll see it's capable of producing deeply capable database applications, with professional and complex layouts, and all the bells and whistles you'd expect from any of the most complicated frameworks, using Material design styling and lots of pre-built swappable UI style choices. It looks great for straightforward database apps, and doesn't involve any weird approaches to development like, for example, Streamlit's oddball state management approach.
    
Of course, I've only spent a tiny amount of time with jam.py, but I know all the components it uses. They are perhaps the most popular tools making up the mainstream Internet; they are well-understood and have continued to be trusted for many years, because they're technically successful and reliably effective, able to integrate with other mainstream tools, more deeply documented for production use than virtually anything else in the software development industry, etc. - so I'm building most of my expectations on those things. Like Anvil, jam.py is a framework that orchestrates the use of those common tools, and provides lots of basic built-in functionality for CRUD, UI, and the most important capabilities that make up any sort of application which accomplishes useful end goals in the real world.
    
I'm going to dig in hard to this framework, even though it doesn't seem to have been very successful. It seems to me that serious developers have likely discounted it wholesale because the marketing materials present it as a no-code/low-code tool. The low-code tooling is actually uniquely usable (which is a joy for little projects, and makes it potentially accessible for less technical end-users in an organizational project which involves multiple teams), but what really catches my attention is the development tooling it's built on, the workflow productivity features it implements, and the way it exposes and integrates pure code - especially the particular tools it uses (pure unfettered Python on the back end, and a beautifully implemented Bootstrap/jQuery/HTML/CSS/JS environment on the front end, with super productive CRUD tools built in).
    
It's going to take a while to fully explore and document the whole system, especially in production work with specialized requirements, but I'll make tutorials along the way. I'm hopeful that this could be a viable alternative to, or even an improvement upon, the Anvil ecosystem I've come to trust for its capability and ability to integrate with other mainstream tools. But don't let the potential depth keep you from checking it out. A non-technical new user (a person without any coding or development experience at all) can begin making really useful database apps with it in less than an hour, using just the visual tools.
    
I'll post tutorials as I make them...

posted by:   Nick     9-Jan-2025/16:17:05-8:00



A quick informal intro to the no-code schema/UI grid builder in jam.py:
    
https://youtu.be/7UZGPX5dJYo

posted by:   Nick     11-Jan-2025/0:27:49-8:00



That's the sort of thing a non-technical user with absolutely no coding experience can learn to create with jam.py in one quick tutoring session. There's certainly a place in some projects where it's helpful for non-technical team members (statisticians, doctors, IT team members, etc.) to be able to do that sort of simple CRUD work, which can be integrated into a project - and this sort of simple CRUD builder can help save developers' brain cells and schedules for the more challenging parts of projects.

posted by:   Nick     11-Jan-2025/0:36:06-8:00



I'll make some more quick videos about jam.py as I get real projects implemented. The intro above just shows how to install it and create basic schema and grids. At first glance, it looks like there'd be no reason to have any interest whatsoever in jam.py, because it appears to be only a super basic, gimmicky little no-code tool. That's why I never really cared to look into it. But what's striking when you dig in is not just how fast the simple no-code builder works, and how much actually useful CRUD capability it enables (with surprisingly functional built-in UI grid and form building features, automatic schema building features, etc.), but how it can be *integrated with limitless pure code capabilities that are built into the conspicuously simple toolchain and workflow it enables.
    
It's the nature of the underlying tooling it supports - the full power of the entire Python ecosystem, the full power of the HTML/CSS/JS/Bootstrap/jQuery ecosystems, the full power of the RDBMS ecosystems (with a built-in ORM, and/or support for any other ORM, or pure SQL, etc.), the auth and audit log systems, the project packaging and deployment systems, the in-browser editors for front-end and back-end code with rudimentary auto-complete, etc. - all fully exposed and easily composable with pure code that can extend to any complexity.
    
The fact that AI code generation tools work better with the *particular choices of fundamental components in this framework (Python, Bootstrap, jQuery, RDBMS, SQL, etc.) than with any other software development components I've seen, and that this particular mix can be integrated with virtually any other web development tooling... all of that is what makes jam.py so potentially powerful and hyper productive.
    
Many of the features above are of course what make any web development framework based on Python so potent (an elegant mix of Python on the back-end and web UI on the front-end), and why those frameworks have become so popular and commonly used in large production projects, in every industry. Jam.py just has a particularly interesting mix of simplicity and ridiculously fast productive potential for CRUD work, which eliminates the time-consuming drudgery of tying UI grids and forms to back-end database schema and logic, and of orchestrating/integrating the most common RDBMS tooling (which can be the *most common* time-consuming pieces of many big data management projects, over and over again).
    
The sort of simplicity jam.py enables, built on top of deeply useful data management power, connectivity with other systems, the ability to integrate endlessly capable 3rd party tools, simple built-in security solutions which are required for virtually any commercial work, etc., is what I dreamed of being able to take part in with Rebol, but which never even came close to becoming a reality.
    
Anvil has been a beautiful, enjoyable to use, and endlessly capable tool for enabling really fast work, and for implementing solutions of all kinds, for every sort of varied problem which has been presented to me in many extremely complex projects over the past few years. And it looks like the tiny little work environment and critically useful set of features in jam.py may enable all the same sorts of benefits, with some really powerful *time-saving* capabilities that can be used in virtually any sort of project. The simple little CRUD grid, ORM, auth, logging, dead simple deployment, and the ability to spin up & connect multiple projects all around the same database, on the same server (or different servers), etc., make it easy for even totally non-technical non-developers to take part in improving workflows that would otherwise involve lots of code and toolchain complexity. From what I've seen so far, jam.py's mix of no-code features is actually amazingly flexible, powerful, and easy to integrate with pure-code solutions. It should be able to handle development work of any type, in any domain, with a simple-to-achieve professional look and feel, all the basic CRUD work simplified, all the benefits of web UI and RDBMS at the core, and the ability to tie into Python CUDA and everything else in the massively powerful Python ecosystem, as well as the massively powerful ecosystem of web UI.
    
That's pretty damn cool, for a tool which non-technical users, who have absolutely no experience writing code, can learn to install and begin building useful CRUD interfaces with in just a few minutes. The fact that the tool not only doesn't limit projects to the well curated, useful, and dramatically simple no-code capabilities it enables, but actually provides easy connectivity with all the most powerful and deeply useful pro development tooling available in the industry (AI code generation now included), is what makes this thing so potentially powerful and productive.

posted by:   Nick     11-Jan-2025/10:51:36-8:00



I say all this, fully expecting that none of it will matter after a few more years of AI evolution. For now, it's a core part of my life, but I don't expect software development to be anything like it has been historically, within my lifetime.

posted by:   Nick     11-Jan-2025/11:28:43-8:00



I'm sure that when looking at a tool like jam.py, or even Anvil, from the outside, from the perspective of a developer coming from other disciplines, the obvious first response must be 'haha, just as I thought, these tools are only about accomplishing the simplest sorts of routine CRUD work', but it's exactly the opposite. Simple database interactions are just small, essential parts of the more complex and unique problem-solving development efforts which need to be tackled in every complex project.
    
For example, one small piece of one of the applications I built this year was an import routine which performed all sorts of logical evaluations and updates to the database, based on clinical data that was being imported. Just one part of that import process made use of statistical matching algorithms, devised by a statistician using a well-known Python library, to look through the database for demographic information which could have been entered slightly wrong, and which wouldn't have been found using exact matching, ILIKE-style pattern matching, or even complex regex evaluations, to find values which had been entered informally in a previously existing database. So, for example, given a social security number with several characters wrong, a birthday a few days off from the correct imported date, and perhaps a phonetically misspelled last name, a match could still be reliably given a similarity score, so that a human doctor could be notified to evaluate it. That code was written by a team member who implemented that set of features totally outside of the development environment in which the database was being queried. We integrated that routine into a larger logical scheme in which imported lab results were used to update existing diagnoses, and to update demographic, MRN, community, and other information in the database, based on a complex set of rules which encompassed a massive amount of the work the doctors previously had to do by hand: evaluating and making changes (especially to diagnoses) based on previously existing conditions and new lab results.
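For a rough sense of how that kind of fuzzy demographic matching works, here's a stdlib-only sketch. The field names, weights, and threshold are hypothetical; the actual project used a statistician's chosen library and far more careful methodology:

```python
# Illustrative sketch (NOT the actual project code): scoring how likely
# two demographic records refer to the same person, using only the
# standard library. Fields, weights, and threshold are hypothetical.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1], tolerant of transposed, wrong, or missing characters."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(imported: dict, existing: dict) -> float:
    # Weight identifiers more heavily than names, which are more
    # often misspelled phonetically.
    weights = {"ssn": 0.5, "birth_date": 0.3, "last_name": 0.2}
    return sum(
        w * similarity(str(imported[f]), str(existing[f]))
        for f, w in weights.items()
    )

# One wrong SSN digit, a birthday two days off, a phonetic misspelling:
# no exact or regex match would find this, but the score stays high.
imported = {"ssn": "123-45-6789", "birth_date": "1970-03-12", "last_name": "Smyth"}
existing = {"ssn": "123-46-6789", "birth_date": "1970-03-14", "last_name": "Smith"}

score = match_score(imported, existing)
# A score above some tuned threshold would flag the pair for human review
# rather than merging records automatically.
```

The key idea is exactly what's described above: candidate pairs get a graded similarity score instead of a yes/no match, and a human makes the final call.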
    
That was just one piece of innovative work in a much larger application, which was a requirement, and which needed to be solved and integrated exactly as specified, within the exact IT, security, and existing organizational and technical infrastructure already in place, along with legal and other compliance requirements.    
    
That innovative work still involved lots of basic UI data grids, forms, SQL queries, schema structure, etc. in the new database system. Those things are just the structural tools required to work with the data; they don't limit what can be done in the innovative pieces.    
    
Tools like Anvil and jam.py simply enable that essential, slogging, time-consuming CRUD work to be completed really quickly, so that the innovative work can get a much greater percentage of the development time. Having great CRUD tools available also means that the user experience expectations of all the end users can be satisfied without derailing the innovative work with lots of detailed changes to UI interactions and layouts, which are expected to be satisfied 100% as requested. When a user asks, for example, 'we want the table headers to appear in blah blah blah way', and for table columns to appear in a dynamically generated order based on logical conditions, for certain panels and interfaces to appear when certain events occur or when certain situational conditions are met, and for every font size, style, margin, padding, and layout spec to be implemented exactly as requested, all of that can be handled immediately, without limitation, and with virtually no effort. And if, for example, some data and analytics team member wants to interface with the database, or create stored procedures in the database for you to implement on the front end, there's no friction whatsoever.
    
That's what the total package of Anvil has enabled, and it's enabled much much more. There just aren't any limitations when you're using tools that can connect to other massive ecosystems and existing patterns of work. If you want to implement any beautiful animation, or 3D augmented reality interface, or integrate any existing AI API on the back end, etc. those things are all built to work *directly with the tools that Anvil, jam.py and other frameworks are built on. The point of the frameworks and ecosystems I talk about here is just to tie those things all together, and orchestrate them in ways that reduce workloads dramatically, so you can focus on innovative problem solving. I know they're easily dismissed, when you look at them and go 'this doesn't look like it does anything useful for innovative work', but it's the exact opposite situation. You want your time to be freed up to perform that innovative unique work, and these tools enable productivity, for the most common, constantly occurring sorts of drudge work that need to be included in any application which does something useful and unique.
    
Those sorts of problems aren't solved by creating new programming languages. They're solved by making it extremely easy to use, integrate, and orchestrate deeply entrenched, intelligently designed, otherwise complicated to use tools (such as RDBMSs), which are ubiquitous, well-trusted, extremely performant and reliable, and which are deeply documented, well understood, and in place in every organization in ways which will simply not change, so that getting work done of any imaginable sort, is easily achievable.

posted by:   Nick     11-Jan-2025/14:39:29-8:00



Here are some off the cuff comparisons:
    
Jam.py uses an interesting proprietary way of accessing the database, with objects that can be referenced the *same way* on the server with Python code, and in the client with JS code. It solves a lot of problems in a conceptually similar way to Anvil, but with very different coding mechanisms than Anvil and other frameworks. It's really interesting, coming from Anvil, because I can see the author's motivation for engineering the whole system the way he did, but I'm not sure how it will sit with others who have no interest in learning a proprietary framework system (even though it connects to and fully exposes common open source pieces throughout).
    
The biggest problem with mainstream full-stack web-based software development frameworks is that you have to pass complex objects back and forth between different front-end and back-end contexts, and typically deal with the data in at least 3 programming languages (front-end: JS and/or some compiled language and/or framework tooling; back-end: whatever language is used for logic/libraries/OS integration, etc.; database: SQL), and usually at least 1 data serialization format (most often JSON now, but could be XML, a proprietary ORM dialect, etc.).
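The boundary-crossing described above can be sketched in a few lines. The same "order" exists three times: as a SQL row, as a Python object, and as a JSON payload the browser's JS will parse yet again. The table and field names here are hypothetical:

```python
# Minimal sketch of the three-representation problem: one record lives
# as a SQL row, a Python dict, and a JSON wire payload. Table and
# field names are made up for illustration.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 'Acme Bakery', 249.50)")

# Representation 1 -> 2: SQL result row becomes a Python object.
row = conn.execute("SELECT id, customer, total FROM orders WHERE id = 1").fetchone()
order = {"id": row[0], "customer": row[1], "total": row[2]}

# Representation 2 -> 3: Python object becomes a JSON string for the
# front end, where JS code parses it a third time with JSON.parse(...).
payload = json.dumps(order)
```

Every one of those hops is a place where field names, types, and null handling can silently drift apart, which is exactly the friction that single-language frameworks try to engineer away.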
    
SQLPage says screw all that and just do everything with SQL, including a dialect to build UIs and APIs directly within SQL queries. Anvil says screw all that and just do everything with Python, including an ORM and visual UI layout dialect and project management tools. Jam.py says here are some objects that you can use to connect database queries with the most common native back-end and front-end languages/frameworks, and some simple no-code tools that eliminate the need to write much of the code needed for the most common interactions.
    
SQLPage is certainly the simplest solution, because it cuts out basically all the complexity of the traditional full stack machinery in frameworks like Ruby on Rails, Django, etc. (or name the most common full stack framework for any major programming language).    
    
Anvil cuts down complexity for the developer by using lots of heavy machinery to simplify every part of the development process, including every layer of the full stack, with simple Python interfaces to complex objects.
    
Jam.py tries to embrace the most popular tooling, and connect/control each part of the stack with the simplest possible lightweight integrated native Python, JS, ORM, and visual tools for each job, with one unified proprietary data access model that works the same throughout the stack.
    
Other frameworks typically provide a way to automate database interactions with an ORM, and then provide some sort of templating engine to generate integrated front end layouts that play nicely with the back end ORM (and that's all controlled by logic in the primary programming language).
    
Choose the approach that suits you best. SQLPage, Anvil, and jam.py are some of the best solutions I've seen in 40+ years of writing software. Anvil is still my favorite, for the largest variety of potential uses, its ability to integrate with virtually any other language/system tools, and its very productive project management tooling. SQLPage is perhaps the simplest possible full-stack RDBMS-based framework, which makes it possible to start small and do some pretty dang big things without ever doing much heavy coding (and it is thoroughly extensible, although it relies entirely on server rendering). Jam.py has particularly productive CRUD features, and exposes universally used native tooling, if you're willing to learn a simple proprietary database abstraction layer which works the same in JS and Python.

posted by:   Nick     11-Jan-2025/19:43:44-8:00



I expect that jam.py will be incredibly productive and universally useful for a massive variety of projects, but you really need to be willing to use Python and JS (jQuery, Bootstrap, etc.) to extend it, and its little proprietary data access model, to do any serious work with it at all.

posted by:   Nick     11-Jan-2025/19:52:55-8:00



To be successful at software development or any other challenge, I think it's important to search out the successful accomplishments which others have achieved, and discover how those accomplishments were achieved, even if the approaches to the solutions are foreign to you (or should I say, *especially if the approaches to those solutions are foreign to you). In most cases, you'll find that foreign concepts don't stay foreign for long, once you dive in and practice new ways of doing things.
    
One of the important things to watch out for in your journey is statements such as 'you are going to have problems doing ____ with that solution', from people who've never used the solution or solved the problems you're hoping to solve. Very often, people get stuck in their own limited way of thinking, based on experiences they've had with the environments and solutions they know. I see it all around, every day, in every domain of human activity: people expect that the limitations they're accustomed to must also exist, unsolved, in other environments.
    
I've found that 'here's how this solution enables ___ that you're trying to achieve' is generally better to search for. Look around, and if you see that people are *actually achieving end goals* that you can't achieve, because you're stuck trying to work out some roadblock in the environment you currently know, then maybe it's a good idea to see how others are actually achieving those goals - because perhaps the roadblocks you're accustomed to simply don't exist in other environments, paradigms, and patterns of work.
    
I've known several self-made billionaires, and their perspective about challenges in life is never the same as that of people who have only ever struggled financially. The financial problems which most people experience often require an utter shift in perspective about how everything around them works - and in order to surpass their challenges, the experience and knowledge which self-made billionaires have, about what's possible and what works in the world around us, might actually be helpful in accomplishing new goals.
    
My perspective is to look at what works, in any domain. When I want to sound like a particular guitarist, I learn the exact music they play, and how they play it, and I buy the exact gear they use, etc. I don't try to make the rig and techniques which Eric Clapton uses work when playing music by Eddie Van Halen. To sound like Eddie Van Halen, I do what Eddie Van Halen did, because he has already *successfully worked out* everything he plays. That doesn't mean that other players don't have great insight about making music. It just means that they don't have the answers about how to sound like Eddie Van Halen. And the same is true if I want to play like Joe Pass. Being an expert in Eddie Van Halen's style does not help you understand how Joe Pass harmonizes solo arrangements. You have to study the utterly different paradigm of Joe Pass's approach and techniques to understand how to play like that.
    
Similarly, if a technological solution actually exists in the real world, which seemed nearly impossible to achieve using previously available tools, I don't try to accomplish the same goal from the ground up using other tools. I learn exactly how the working solution was created, and I explore the tools and techniques which enabled the new accomplishment. Often, it takes learning about entirely new paradigms, and entirely new environments/approaches to solving problems, but along the way I can expect to learn that limitations I'd experienced in previous paradigms have been surpassed, or that roadblocks I've gotten stuck on previously, have been eliminated within the fabric of other deeply engineered solutions.
    
Anvil hasn't let me down because the creators took into consideration worlds of experience, regarding all the sorts of things developers have had problems with in the past, and implemented solutions based on well established best practices from many successful paradigms. Limiting yourself to old ways of looking at solutions to problems, without exploring, experiencing, or fully learning about the solutions which have *actually been successful*, is something I'd suggest avoiding, if the goal is to move forward successfully.
    
Look around, and if what you want to do is actually *being accomplished* by anyone, start by learning how they've achieved the goal, then embrace, explore, dive in and internalize any new paradigms which have enabled that success, and work from there. A life built from learning to embrace foreign concepts and approaches to solutions is one where new concepts are never foreign for long.

posted by:   Nick     12-Jan-2025/15:09:49-8:00



I've spent so many thousands of hours of my life working with so many hundreds of software development tools and paradigms. It's always amazing to see how challenges shift in different environments. In the end, I always evaluate which tools enable the greatest overall productivity and achievement, in the context of living with the best possible quality of life, and making the greatest possible positive difference in the world and the lives of the people around me. Anvil has far and away outperformed any other software tool I've ever used, by those measurements.

posted by:   Nick     13-Jan-2025/0:22:03-8:00


