Home   Archive   Permalink



AI and modern tools

I read this article yesterday by Blaise Agüera y Arcas and Peter Norvig:
    
https://www.noemamag.com/artificial-general-intelligence-is-already-here/
    
Years ago I had a meaningful interaction with Peter Norvig about Rebol. He was interested in Rebol, but chose to use Python at Google because Python was open source. That was around the time I started pushing publicly for open source Rebol ;)
    
This year my universe has changed profoundly, and not just as a developer. My expectations about how technology will influence the future of humanity's condition have come into clear focus. And much of that future is already immediately in front of us.
    
During the past few months I've been producing some good-sized projects for the medical field with Anvil (billing and all sorts of operations-automation projects). They're the sorts of projects that would each have taken months in the past, using any tools, including Rebol - and in fact, similar apps have been created by other teams of developers, for my clients, at many multiples of the time and cost of my current work. The pressures in these projects would have been stressful, to say the least, in the past, but with the help of AI, my approach to generating solutions, working out problems, organizing ideas, integrating legacy code, learning about new topics, communicating with clients, etc., has become much like the sort of child's play I dreamed could have been possible in a 'less complex' Rebol world.
    
I have been gobsmacked by the deep capability of GPT (and even, briefly, of some of the other open source tools recently) to perform reasoned work - and not just in coding. I use it to prepare documents such as quotes, and even to help organize some common human communication tasks. I use it to help understand and prepare learning paths and tool choices best suited to solving the full scope of a problem, and then I use it to drill down into the details of each step of a solution. I regularly use it to quickly summarize thousands of lines of legacy code which might otherwise take me dozens or hundreds of hours to familiarize myself with. Then I use it to help with the details of integrating that code with my own hand-written code - and I use it to generate a portion of that code automatically and instantly.
    
I've learned to interact with GPT in ways that have blown me away, again and again, multiple times a day, in terms of just how smart, capable, and truly *useful it has already become, in just a matter of months. My world has a dramatically simpler and easier outlook, and not in some imagined future.
The reduction of complexity that I used to dream of has become a stark reality - and it hasn't come so much as a result of programming language choice (although Anvil, with the full power and connectivity of Python and JS, has been a profoundly productive, capable, and pain-free choice, easily accepted and integrated in demanding commercial environments). It's just as much possible because the current level of AI capability reduces the complexity involved in approaching any project to a level that was previously unimaginable. I'm enjoying this work in a way I never have before. It's awesome to have a pair programmer for $20 a month who knows more about everything than any single human (or even a team of humans) ever could, who's available 24/7/365, who never tires, is always polite, and works 100,000x faster than any human (or team of humans) could.
    
I think the fact that I'm using AI to write code, and to gain understanding of and facility with problem scope and solution choice (as opposed to the 'parlor tricks' much of humanity currently uses it for) - really using it to explore and integrate reasoned solutions to problems at every layer, from broad research to understand a problem space, to choosing tools for a solution, to grinding out detailed code and integrating it with existing legacy code - that constant process has proven daily that AI is already amazingly capable of reason. We're just at the beginning, and my work still requires a human to orchestrate and integrate any work that is done by AI, but it's been an absolutely astounding journey in 2023. I'm equally encouraged and terrified of what is immediately around the corner for humanity...

posted by:   Nick     11-Oct-2023/6:43:22-7:00



One of the topics addressed in the article above is 'in-context learning', and what I've witnessed in that regard has been staggering at times. A task I repeatedly come across is learning an API in order to integrate solutions. Whether I'm dealing with a new API for a tool, or a third-party REST API which lets a software solution connect with a third-party data source (for example, reports of account transactions performed by a payment processor, information about individuals from a background-check service, integrating in-house healthcare data with an HL7 interface, etc.) - no matter the problem domain or technical specifics - GPT has been amazing at helping to perform integrations. I regularly just paste in the documentation for some unknown API and start by asking questions about how the API works. Often the documentation will contain hundreds of functions, arguments, and examples - and GPT just immediately understands how the API works, and can often provide better explanations and more specific answers about how to perform a particular data transformation than any human-written tutorial could. Then it can help produce not just the logic required, but the specific working code required to connect the inputs and outputs of existing legacy code, or new code which needs to be written, to the new third-party API/tool. It's knowledgeable enough about the world, and about other tools - especially the tools I'm currently using - that you can simply ask questions, paste in errors from a console (or whatever tool stack you're using), and work iteratively at speeds, and with a facility, that were never previously possible. If GPT knows your development tooling, it can explain error messages (more than any single human could ever know) and immediately provide explanations and solutions (working code) which eliminate the error.
As you work iteratively, you can ask questions about logic flow, and converse about larger tooling decisions, all in line with the workflow, just as if you were speaking with an extremely knowledgeable human.
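To make that workflow concrete, here's a minimal sketch of the 'paste in the docs, then ask questions' loop, assuming the OpenAI Python client (v1.x); the model name, prompt wording, and helper functions are illustrative choices, not a prescribed recipe.

```python
# Sketch of iterative in-context API learning: the pasted documentation
# becomes system context, and each Q&A turn is kept so follow-up
# questions stay in context. Names here are illustrative assumptions.

def build_messages(api_docs, history, question):
    """Assemble a chat transcript: docs as system context, then the
    running Q&A history, then the new question."""
    messages = [{
        "role": "system",
        "content": ("You are helping integrate a third-party API. "
                    "Answer using only this documentation:\n\n" + api_docs),
    }]
    messages += history
    messages.append({"role": "user", "content": question})
    return messages

def ask(client, api_docs, history, question, model="gpt-4"):
    """One iterative turn: ask a question against the pasted docs,
    then record the exchange in the shared history list."""
    reply = client.chat.completions.create(
        model=model,
        messages=build_messages(api_docs, history, question))
    answer = reply.choices[0].message.content
    history += [{"role": "user", "content": question},
                {"role": "assistant", "content": answer}]
    return answer

# Usage (needs the openai package and an API key; file name is hypothetical):
#   from openai import OpenAI
#   client = OpenAI()
#   docs = open("payment_api_docs.txt").read()
#   history = []
#   ask(client, docs, history, "Which endpoint lists settled transactions?")
```

Pasting console errors back in works the same way - each error message is just another `ask()` turn against the accumulated context.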
    
Of course it's not anywhere near perfect. I have seen GPT hallucinate (I never trust it to produce necessary data sets), but the overall increase in production, when working with tools and languages that it knows deeply (those with deep documentation and millions/billions of lines of code, tutorials, etc. available online), is staggering.

posted by:   Nick     11-Oct-2023/7:09:47-7:00



One thing that's interesting to me is that I regularly hear people describe their experience with current AI as frustrating or awkward, to the point of being useless. In the demos that I've seen online, with people requesting something along the lines of 'build me a shopping cart to display my products', the results are often embarrassing. But when I discuss the specific arguments, logic, and return values of a function - how conditional evaluations, loops, calls to other functions, and the path which values take during the journey through front end, back end, database, etc. affect required data transformations - with that sort of approach, GPT is outrageously helpful. Of course it also really matters which stack a developer is using, and what sort of work a developer is performing. But I think that is simply a matter of deep documentation, available training data, etc. The level of capability that already exists with those stacks which have deep documentation and an enormous base of training data has proven what is possible, and the fact that it can already do such a good job with in-context learning provides just a glimmer of insight into what is coming in the near future. I'm constantly enthralled, and we're witnessing just the first step in a trillion mile journey...

posted by:   Nick     11-Oct-2023/7:27-7:00



Generative AI is NOT general AI. Whoever thinks that, I can't consider competent.

posted by:   Kaj     11-Oct-2023/17:40:07-7:00



Kaj, in all our discussions here, you've disparaged some fantastically talented people. Why such strong vitriol? Really, you can't consider Peter Norvig and Blaise Agüera y Arcas competent because of some assertions they put forth in a single article? I thought the article was well written and made some great points. Would you really consider a point of view in one article a counter to all they have achieved?
    
I'm curious whether you are actually *using any AI daily to produce useful output. Are you following developments closely? Have you done research comparable to that produced by Microsoft? https://www.microsoft.com/en-us/research/publication/sparks-of-artificial-general-intelligence-early-experiments-with-gpt-4/
    
I'm just speaking from my own experiences on this topic. I don't have much interest in getting caught up in a fruitless argument about whether to call what has begun 'general AI', or even in discussing whether LLMs should be considered intelligent. I'm interested in tools that are productive, and which lead to better quality of life. The current crop of AI has certainly fallen into that category, for the purposes I've described above. And from what I see happening, I expect progress to continue to ramp up quickly. The fact is, LLM technology is tremendously useful to me, as it exists right now. My clients are benefitting, their clients are benefitting, I'm benefitting, everyone in my sphere of influence is benefitting, and so are millions of other people. I don't have any need to argue about those results - I just hope to share the benefit of my recent experiences with anyone who may find it useful. If that's no one in this environment, then I'm happy to simply express some personal joy here, and live happily with those benefits, among others whose lives have been similarly improved :)

posted by:   Nick     11-Oct-2023/21:23:27-7:00






I don't see where I am "disparaging fantastically talented people". That is certainly not my intention.
    
On the other hand, there is a lot of unwarranted hype around AI. To the point that even some talented people get carried away, for example by going outside their area of expertise. Whenever there is such an unbalanced situation, I try to pull it in a more balanced direction - usually by pointing to other talented people with different opinions.
    
In the other, long thread where you introduced AI into the discussion, I linked you to an essay by Noam Chomsky and others. Please reread that for an expert opinion about why generative AI is not general intelligence.

posted by:   Kaj     12-Oct-2023/6:55:41-7:00



"Whoever thinks that, I can't consider competent" - that came across as disparaging towards the authors and anyone who agreed with the ideas presented in the article. I apologize if I misunderstood.
    
I've read the Noam Chomsky article. I think the crux of his point is 'such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.' I don't care to argue that assertion.
    
Whatever ends up being the case with future developments in AI, or whatever the nature of misunderstood emergent capabilities turns out to be (a really interesting topic!), Noam Chomsky did write specifically in his article that LLMs may be useful 'in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse)' - and *that is the purpose of this topic. I'm thrilled with what LLMs can currently do to reduce the complexity of my workload, to increase my productivity, and to improve the quality of life for me and others whose lives those capabilities touch. The limited 'reasoning' that neural nets can perform is currently useful in the context I've described, and the benefits of those capabilities have not been trivial to me.

posted by:   Nick     12-Oct-2023/18:03:35-7:00



BTW, without any disrespect for Dr. Chomsky, as exhaud.com has pointed out, 'Noam Chomsky is, according to Wikipedia, a linguist, philosopher, cognitive scientist, historian, social critic, political activist, a father, a 91-year-young gentleman and NOT a programmer or computer scientist, however he [had] a huge influence in the theory of Formal Languages'. I'm inclined to consider the 'expert opinions' of Blaise Aguera Y Arcas and Peter Norvig to be at least as worthy of consideration in regards to topics related to computer science and artificial intelligence.

posted by:   Nick     12-Oct-2023/18:08:31-7:00



Whoever wrote the article, I think it's interesting food for thought. In the end, my point here was to express how useful this technology has been to me this year. I think most people miss how useful and productive it can actually be, even in its current state, when used well. It does take some time to learn how to interact with it effectively, and clearly it's much more capable in some problem domains, or when applying its capabilities to some tools over others, but for the time I've spent using it this year, the investment of effort has paid off well.

posted by:   Nick     12-Oct-2023/18:14:59-7:00



The general use of AI in this day and age is in generative AI...

posted by:   Arnold     13-Oct-2023/1:38:40-7:00



Nick, we define intelligence as the abilities of the human brain. How is a cognitive scientist not an expert in that field?
    
Further, as a linguist, Chomsky is an expert in the field of the approach that generative AI is taking: Large Language Models.
    
Not just a linguist, but the major force in the theory of formal languages - which is a model for all languages, very much including computer languages. Have you read the site of my Meta language? I refer to Chomsky multiple times to explain the theoretical base of Meta.
    
I am happy for you that generative AI works so well for you, but I would rather not see my colleagues in my domain sell it to the world as something it's not. Which even governments and international organisations understand to be very dangerous.
    
I reckon it works well for you because you do well-understood automation work. Generative AI cannot do my work, because I do innovation, and it can't do that because it is not general intelligence.

posted by:   Kaj     13-Oct-2023/9:12:45-7:00



"we define intelligence as the abilities of the human brain"
    
"Generative AI cannot do my work, because I do innovation, and it can't do that because it is not general intelligence"
    
I've said it repeatedly: my point in this topic is not to argue the nature of intelligence - just to point out the value and usefulness of the tools I've been using and enjoying.
    
Innovation is not the only valuable work. Implementing well-understood automation has a tremendously beneficial effect on quality of life for my clients and for all the people it touches.
    
    
"I would rather not see my colleagues in my domain sell it to the world as something it's not"
    
I have nothing to sell, and I've only expressed how it's useful to me.

posted by:   Nick     14-Oct-2023/9:47:53-7:00



Kaj,
    
I've done innovative work at a very high level, as a human musician for more than 40 years:
    
https://www.youtube.com/watch?v=DnfH-4dCRiY
https://www.youtube.com/watch?v=qLhmAaYzfgs
    
Those videos are two casual, trivial creative moments from tens of thousands of moments in a long personal history of music making. That history includes more than 3000 professional performances, many with well known world-class musicians - most of which could never have been produced by computers or AI as it has existed historically. But I'm not so cocky that I don't expect AI to fully surpass the capabilities of my own human intelligence and experience during the coming few years. Beyond that, I think it's ridiculously short-sighted not to grok that humanity is just taking some first meaningful steps in the AI technology evolution. Those steps don't *currently encompass sentience - there's no need to argue that - but I fully expect sentience and true intelligence might emerge from the general directions currently being explored in neural networks, machine learning, etc. And I'm not alone in that expectation - just read any of the writings, interviews, papers, etc. by many luminaries in the field of AI research. That outlook, by professionals in that field, has changed dramatically in just the past 3-5 years.
    
If true intelligence doesn't evolve directly from current efforts, I believe the momentum of research is likely moving in the right direction. Have you seen Microsoft's LongNet paper, about scaling context length to 1 billion tokens in the near future (context size is currently somewhere around 12,000-16,000 tokens in GPT-4)? https://syncedreview.com/2023/07/10/microsofts-longnet-scales-transformer-to-one-billion-tokens/ https://winbuzzer.com/2023/07/07/microsoft-longnet-delivers-billion-token-transformers-pointing-to-potential-super-intelligence-xcxwbn/ Almost certainly, whatever intelligence is created will have different properties than human intelligence; most likely it will be put to use in evil ways by humans, and perhaps that intelligence will end up being deadly to humanity on its own. That's why I wrote in the very first post: "I'm equally encouraged and terrified of what is immediately around the corner for humanity...".
    
So, although I do understand and have put to use a variety of LLM technology, I'm not an AI research scientist, and I don't claim to know anything about where current technology will certainly lead. My point has only been that even if current LLMs just cleverly mimic some ability to produce reasoned output, I have found such output to be *tremendously *useful, even if all those tools can do is re-arrange existing knowledge. And although 'in-context learning' is different from human intelligence, the ability to consume an enormous amount of code, specification documentation, and other volumes of information, and describe any piece of the logic in that information, and provide reasoned code which interacts with and extends that existing code - that has been a tremendous time saver and a dramatic productivity boost in my work. That's simply the truth of my experience. There's no philosophical evaluation about the nature of intelligence in that truth.
    
The purpose of this post was to point out such useful capabilities of LLMs to productively complete useful work. I've only described my experiences with the objective value of that technology, as it currently exists, in doing work that is beneficial to me and others whose work I affect. If you or anyone else is interested in the specifics of how I've used LLMs to complete tasks effectively, I'm happy to share more about that. If it's simply not useful to your innovative work, that's fine - you don't have to take part in this conversation - but I believe this topic is relevant to the content of this particular forum, because I can compare and contrast the outcomes with my experiences with Rebol. In the end, Rebol was a tool for me to produce useful software for end-users. It was the most productive tool I could previously find to build the sorts of software I built - and I've been searching for tools for more than 40 years. The current crop of tools I use are far more productive, capable, and practical for my needs. As much as it's important for you to set your innovative work apart from the sort of work I do, my work is tremendously beneficial, practical, and functional for the people and businesses I help.
    
Your interest in Rebol represented something deeper than mine. I was a *user of Rebol. Although I did understand and embrace the deep values and ethos which made Rebol productive and practically useful, your interest has been more about creating those tools, and about how eliminating complexity in programming language design - and even deeper engineering concerns - can have a tremendously beneficial effect on the entire field of computer science. My current personal outlook is that AI, machine learning, neural nets, etc. will likely have an even more dramatically powerful effect on the nature of computer science and the role of computers in future human existence. I think this is a great introduction to machine learning, for anyone interested in the topic:
    
https://www.youtube.com/watch?v=1vkb7BCMQd0

posted by:   Nick     14-Oct-2023/11:07:11-7:00



And just to be clear, I respect your work and I think your values are well placed and your work is valuable. I'm only speaking from the point of view of my own work and needs.
    
I currently need tools that enable deep connectivity with other systems. Python and JS enable connectivity with virtually any API, service, system, and 3rd party tool that I come in contact with.
    
I need to be able to integrate software painlessly with existing IT infrastructure. I can give my projects to IT professionals who are familiar with the Python ecosystem, package management, and delivery mechanisms. And they are familiar with all the pieces and parts, as well as how to implement them in a secure environment, so that we can ensure HIPAA compliance, PCI compliance, etc. Those are massively important requirements for some clients, which I wouldn't even want to approach without using well-known mainstream tools.
    
I need to be able to do things like generate PDFs, connect and migrate data between third-party database systems (instantly, without any engineering work on my part), integrate third-party front end frameworks, 3D engines, etc. I need to be able to take code bases written by previous engineers and integrate that old code into projects that use new delivery mechanisms (for example, implement code from old desktop apps in web apps).
    
I've done all those things in projects during the past month. And I've done it in ways that have been 10x faster and 10x cheaper than it would have been for my clients to get the work done by developers using other tools. And I've loved every minute of that work, because the tools I use have been so enjoyable to put to work. I think there are likely many developers - including many who were previously drawn to Rebol - who would be interested in having those sorts of capabilities, and who would like to enjoy their work the way I do. That's why I'm sharing my experience.

posted by:   Nick     14-Oct-2023/11:47:54-7:00



I can certainly understand why you must be frustrated hearing a fellow Reboler explain why he likes Python, JS, and AI tools - that's about as far from the Rebol ethos as could be imagined. But I loved Rebol (and still have a deep appreciation for its ethos) because it was a productive tool. I was more interested in the end result of that ethos. The end result of using the other tools I currently use is far greater productivity, enjoyment of the work, reward, quality of life, etc. I know you hate the value system and engineering ethos that enables those tools, but I think they may be useful to others who used or were interested in Rebol, for the same reasons I was. I don't mean to offend your values - and I'll say it again, I think your values are genuinely meaningful - I just currently need productive tools to affect the world around me, and my own life, in as positive a way as I possibly can. I'm sharing my results. I understand if you disagree because they don't fit your purpose. I do enjoy feedback and thoughtful discussion. I just hope the purpose of sharing my experiences doesn't get lost in defending the value of what I have to share. ... Now I'm going to get back to work building really useful, powerful software that works for my clients :)

posted by:   Nick     14-Oct-2023/12:02:07-7:00



If during the next few years we start to see humanoid robots that, for example, enable humans to say 'clean up my room and these dishes, these plates go here, I like my coats to be hung up here...', or robots that can move a frail human in their bed, clean up bed pans, etc. - all of which is entirely within the realm of current neural network capabilities - then why is that such a 'false promise' (Chomsky), or necessarily the root of a cause that will 'degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge'?
    
If I can save myself dozens of hours by having GPT explain the purpose of hundreds of variables, and how they're used in the logic of hundreds of functions within thousands of lines of an existing code base - or if I can save myself a few hours of reading documentation to look up how to assemble arguments for a function call in some one-off API I need to connect to in order to make a payment or pull data from a background-check service - why must there be a need to demean the value of that capability just because it's not based on real 'intelligence'? Those sorts of capabilities have value, purpose, and usefulness to us.
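As a rough sketch of how that legacy-code workflow can look in practice - assuming the OpenAI Python client (v1.x), with the chunk size, model name, and prompt wording as illustrative choices rather than tuned values - the idea is just to split the source into context-window-sized pieces and ask for a summary of each:

```python
# Sketch: map a large legacy code base by chunking it to fit the model's
# context window, then requesting a per-chunk summary of functions,
# variables, and cross-references. Names here are illustrative assumptions.

def chunk_source(source, max_lines=200):
    """Split a large source file into fixed-size line chunks small
    enough to fit in the model's context window."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def summarize_codebase(client, source, model="gpt-4"):
    """Ask for a summary of each chunk: what the functions do, what the
    key variables hold, and how the pieces connect."""
    summaries = []
    for i, chunk in enumerate(chunk_source(source)):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": ("Summarize part %d of a legacy code base: "
                                   "the purpose of each function, the meaning "
                                   "of key variables, and calls to other "
                                   "parts.\n\n%s" % (i + 1, chunk))}])
        summaries.append(reply.choices[0].message.content)
    return summaries
```

A naive fixed-line split can cut a function in half; splitting at top-level definitions instead would keep each chunk self-contained, at the cost of a slightly smarter chunker.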

posted by:   Nick     14-Oct-2023/12:46:02-7:00



I don't consider the sorts of capabilities I've described from my own work, as I've portrayed it here, to be 'unwarranted hype around AI'. I consider those sorts of capabilities worthy of evaluation by anyone who does the sort of work I do. I think the lofty evaluations of LLM 'intelligence' miss the point of just how useful and productive these tools can be in the kinds of work I find valuable in life.

posted by:   Nick     14-Oct-2023/13:03:17-7:00



BTW, although Noam Chomsky may be a cognitive scientist, that doesn't make him an expert in the field of neural network research. It only makes him an expert in comparing some qualities of human intelligence with some qualities of artificial intelligence which result from neural net research. And that is a separate subject from the topic that I hoped could be the point of this discussion: exploring some practical and useful capabilities of current artificial intelligence. Like I said, I appreciate the thoughtful discussion, food for thought, and your point of view.

posted by:   Nick     14-Oct-2023/13:39:23-7:00



Also, the current trend in neural network engineering is expanding more and more into multi-modal capabilities - those related to image, sound, video, tactile sensory perception, etc. - specifically encompassing characteristics of intelligence which are not related to spoken language, LLMs, etc. The idea of training neural networks and watching how they devise solutions to problems - and then watching how they develop emergent capabilities that no one can currently explain (including some of the capabilities noted in the Microsoft research I linked earlier) - is fascinating, and I believe it points to coming evolution in the field. Although I'm only a novice in studying neural networks and machine learning, I do regularly read and learn about the technical mechanisms involved, as well as the technical improvements being made, and I'm basing my expectations about potential future improvements on seminars, papers, interviews, etc. by the people who have been most influential and successful at creating capable tools in that field.

posted by:   Nick     14-Oct-2023/14:07:46-7:00



https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
    
The article above about emergent capabilities mentions that current models 'are notorious liars. We’re increasingly relying on these models to do basic work [...] but I do not just trust these. I check their work'.
    
I noted the following above about current LLMs: 'Of course it's not anywhere near perfect. I have seen GPT hallucinate (I never trust it to produce necessary data sets), but the overall increase in production, when working with tools and languages that it knows deeply (those with deep documentation and millions/billions of lines of code, tutorials, etc. available online), is staggering.'
    
There are interesting collections of emergent capabilities at these links:
    
https://www.jasonwei.net/blog/emergence
https://openreview.net/forum?id=yzkSU5zdwD
    
It's very interesting to watch research and development in this field this year!

posted by:   Nick     14-Oct-2023/16:54:05-7:00



You keep positioning AI as superseding REBOL, and by extension Meta. It is not. AI is a software technology that is written in programming languages, and is capable of manipulating other software that is written in programming languages. There is no reason that those programming languages could not be REBOL or Meta.
    
Have you read the blurb at the top of the Meta site about AI? There is a future for Meta in AI. And since Meta is so performant, there could even be a future for AI written in Meta.

posted by:   Kaj     14-Oct-2023/17:40:46-7:00



For my needs, the *productivity, enjoyment, and quality of life* I experience now - made possible by the tools I'm currently using, for the sorts of work I do - have far exceeded any of the benefits Rebol offered me. That doesn't have to mean that one technology necessarily supersedes the other. I've never used Meta to complete a project, so I can't speak about it - I hope it's clear that I'm not making any comments whatsoever about Meta. I would love to discover that those goals of productivity, enjoyment, and quality of life can some day be exceeded even further by even more smartly engineered systems - perhaps Meta will achieve that; that would be fantastic :) BTW, do you really think I could possibly not understand that AI systems are software technologies written using programming languages? That response does come across as a bit unnecessarily condescending.

posted by:   Nick     14-Oct-2023/18:43:04-7:00



The benefits that come from using technologies such as Python and JS haven't materialized because they are built on superior language designs. The benefits come from their popularity. There are millions of people who've standardized solutions to common problems using those language platforms. Anvil has made using Python enjoyable. It's just a practical framework and environment that makes so many of the common development tasks so simple and easy to complete that I can get the majority of most applications completed in a fraction of the time it takes with any other set of tools, in any environment I've ever tried, regardless of language. And then using libraries, connecting with other tools, using existing code (both Python and JS), meeting compliance requirements, achieving acceptance in IT environments, being able to work with others, delivering applications (and app updates) immediately for use on any common device/OS platform (Windows, Mac, Linux, iOS, Android, Chromebook), etc., is all just built-in. An additional great benefit that I just keep running into recently is that I can use AI to generate code, answer questions about tooling, about documentation, and about large pieces of original legacy code which would otherwise take many hours of research, study, and work. I can even use GPT to find errors, and to generate solutions for little needle-in-a-haystack type problems - especially in someone else's existing code, which can take so much time and effort. That's a tremendous improvement in quality of life. And although I've hated working with JS, GPT has taken so much of that pain away, because it can generate code in ways which make me hate that environment far less.
A human brain is still required to do all the thoughtful engineering work, specification definition, design, etc., but using the tools I've mentioned takes away so much of the time consuming grunt work that there's much more time and energy left to focus on the problems which require the best of my efforts. If there were tools around which provided all those benefits to the Rebol language, I would have much preferred using Rebol. Perhaps as the tooling and ecosystem of Meta evolves, I will much prefer to use Meta. Right now, I'm using Anvil, Python, JS when needed, other database systems when needed, other UI frameworks when needed, and any other tooling when needed to complete work - and I use GPT to help choose tooling, to provide useful documentation, to quickly integrate third party code, etc. Using that stack, I've run into fewer roadblocks than I've ever experienced before, in every project I've gotten involved with - I'm enjoying myself in a way I never have before, and making lots of other people happy and more productive along the way :) I'd love to see a language such as Meta, and the tools/ecosystem which eventually evolves around it, someday supersede all that.

posted by:   Nick     14-Oct-2023/19:49:34-7:00



This took a minute. I haven't checked the Meta code. I provided a zip file of the pages of Arnold's unofficial site as documentation for GPT to utilize (of course it can perform *much, *much more complicated work when working with Python, JS, etc.). I'm curious how it did at assimilating Arnold's docs, evaluating the code, and making a simple alteration.
    
    
    
User
Please explain the following meta code example:
    
(I PASTED ARNOLD'S PIG LATIN CGI APP HERE)
    
    
ChatGPT
The provided Meta code is a CGI script for a web-based Pig Latin generator. Here's a step-by-step breakdown of the code:
    
Metadata:
The script starts with a block of metadata, which is common in REBOL and Meta scripts:
    
meta
Meta [
     Title: "Piglatin CGI script with form"
     ...
     Purpose: {
         Create a piglatin FORM for Meta CGI usage.
     }
]
This block provides metadata about the script, such as its title, authors, copyright details, and its main purpose.
    
Reading User Input:
Next, the script sets up to read user input, either from a POST request or a GET request:
    
meta
; Write header information
write/line {Content-type: text/html}
    
; Read from STDIN, this is for POST
data-in: read/line
Here, the write/line function sends an HTTP header indicating that the output will be HTML content. The read/line function reads from STDIN (Standard Input), which, in a CGI context, is typically the data from a POST request.
    
Processing POST Data:
The script then processes the POST data, if any:
    
meta
MAX-CONTENT-LENGTH= 80
text: select/case system/environment "CONTENT_LENGTH"
...
The script checks the content length of the POST data and ensures it doesn't exceed a maximum length (MAX-CONTENT-LENGTH). The POST data is then parsed to extract the actual content.
    
Processing GET Data:
If the script doesn't receive POST data, it checks for GET data:
    
meta
; Read a QUERY string, this is for GET!!
text: find/tail select/case system/environment "QUERY_STRING" "="
This segment of the code reads the query string from the environment, which is typically the data from a GET request.
    
Converting Text to Pig Latin:
After obtaining the input text, the script then converts it to Pig Latin:
    
meta
; Prepare string for the output
piglatin: ""
    
if text [
     ; convert text into piglatin
     ...
]
The main logic for the Pig Latin conversion is contained in this block. The script processes each word in the input text, determines if the word starts with a vowel or a consonant, and then applies the appropriate Pig Latin transformation.
    
Generating the Web Page:
Finally, the script generates an HTML web page that includes a form for user input and displays the Pig Latin conversion:
    
meta
; Write the form
    
write {
...
}
This part of the script constructs the web page using the write function. The page includes:
    
A form that prompts the user to enter text.
A display area that shows the converted Pig Latin text.
Styling for the web page, such as colors and layout.
In summary, this Meta script is a web-based application that allows users to input text, converts the text to Pig Latin, and then displays the Pig Latin version of the text. The script handles both GET and POST requests and employs a basic Pig Latin conversion algorithm.
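As an editorial aside, the POST/GET input-reading flow described above maps onto a few lines of Python. This is a minimal sketch of the same CGI pattern, not a translation of the Meta code; the environment and input stream are passed in explicitly rather than read from `os.environ`/`sys.stdin`, just to keep it easy to exercise:

```python
from urllib.parse import parse_qs

def read_form(environ, stdin):
    # (A real CGI script would first emit the "Content-type: text/html"
    # header, as the Meta script does with write/line.)
    # POST data arrives on stdin, sized by CONTENT_LENGTH;
    # GET data arrives in the QUERY_STRING environment variable.
    if environ.get("REQUEST_METHOD") == "POST":
        length = int(environ.get("CONTENT_LENGTH", "0"))
        raw = stdin.read(length)
    else:
        raw = environ.get("QUERY_STRING", "")
    return parse_qs(raw)

print(read_form({"QUERY_STRING": "text=hello+world"}, None))
# {'text': ['hello world']}
```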
    
    
    
    
User
Please explain the code block which performs the pig Latin conversion
ChatGPT
Certainly! The Pig Latin conversion is a fun linguistic game where words in English are altered. The general rule for converting words into Pig Latin is:
    
For words that begin with consonant sounds, the initial consonant or consonant cluster is moved to the end of the word and "ay" is added. For instance, "hello" becomes "ellohay" and "smile" becomes "ilesmay".
For words that begin with vowel sounds or silent letters, one just adds "way" or "yay" to the end. For example, "egg" becomes "eggway" or "eggyay", and "inbox" becomes "inboxyay".
Let's break down the code block that performs the Pig Latin conversion:
    
meta
if text [
     ; convert text into piglatin
     while all [text not is-tail text] [
         plc: 0 pec: 0
         if plus-word: find text "+" [plc: count plus-word]
         if perc-word: find text "%" [pec: count perc-word]
         any [if all [plc = 0 pec = 0][look-for: "+" skip-count: 1]
             if any [pec = 0 plc > pec][look-for: "+" skip-count: 1]
             if any [plc = 0 plc < pec][look-for: "%" skip-count: 3]
         ]
         text: if space: find word: text look-for [
             word: copy cut text (count text) - (count space)
             skip space skip-count
         ]
         unless is-empty word [
             string-input: join/with string-input word
             vowel: none
             character: word
    
             until any [
                 is-tail character
                 if find "aeiou" copy cut character 1 [vowel: character]
             ][
                 advance character
             ]
             either is-same word vowel [ ; Word starts with vowel
                 piglatin: join/with join/with piglatin word
                 "yay" ; way, hay
             ][
                 either vowel [
                     piglatin: join/with join/with piglatin vowel
                     copy cut word (count word) - (count vowel)
                 ][
                     piglatin: join/with piglatin word
                 ]
                 piglatin: join/with piglatin "ay"
             ]
         ]
         piglatin: join/with piglatin " "
         string-input: join/with string-input " "
     ]
]
Here's the step-by-step explanation of the above code:
    
Initialize Loop: The code begins by checking if there is text to convert. If there's text, it enters a while loop that continues until all the text has been processed.
    
Identifying Word Boundaries: The loop starts by identifying boundaries between words. It looks for "+" and "%" characters, which are likely used to represent spaces and other special characters (e.g., due to URL encoding of the input).
    
Extracting Individual Words: The script then extracts individual words from the input text, delimited by the boundaries identified in the previous step.
    
Identifying Vowels: For each word, the script identifies the position of the first vowel. This is important because the Pig Latin transformation depends on whether a word starts with a vowel or a consonant.
    
Pig Latin Transformation:
    
If the word starts with a vowel, the script appends "yay" to the end.
If the word starts with a consonant, the initial consonant or consonant cluster is moved to the end of the word, and "ay" is appended.
Combining Results: The transformed word is then added to the piglatin variable, which accumulates the Pig Latin translation of the entire input text.
    
By the end of the loop, the piglatin variable contains the Pig Latin version of the input text.
    
This code provides a basic implementation of Pig Latin conversion, and there are many variations of the rules for Pig Latin. The code might need modifications to handle edge cases or different variations of Pig Latin.
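For comparison, the transformation rules summarized above - plus the URL decoding that the Meta code does by hand with its '+' and '%' handling - can be sketched in a few lines of Python. This is a rough equivalent of the described algorithm, not a translation of the Meta code:

```python
from urllib.parse import unquote_plus

VOWELS = "aeiou"

def pig_latin_word(word):
    # Words starting with a vowel just get "yay" appended.
    if word[0] in VOWELS:
        return word + "yay"
    # Otherwise move the leading consonant cluster to the end, add "ay".
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[i:] + word[:i] + "ay"
    # No vowel at all: just append "ay".
    return word + "ay"

def pig_latin(form_text):
    # CGI form input arrives URL-encoded: "+" for spaces, "%xx" escapes.
    text = unquote_plus(form_text)
    return " ".join(pig_latin_word(w) for w in text.split())

print(pig_latin("hello+smile+egg"))  # ellohay ilesmay eggyay
```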
    
    
    
    
User
Please adjust the original pig Latin script so that it adds the text 'blurg' to the end of each generated pig Latin word
ChatGPT
To adjust the original Pig Latin script to append "blurg" to the end of each generated Pig Latin word, we need to modify the sections of the script where the transformed word is appended to the piglatin variable.
    
Let's make the necessary adjustments:
    
meta
...
; Prepare string for the output
piglatin: ""
    
if text [
     ; convert text into piglatin
     while all [text not is-tail text] [
         ...
         unless is-empty word [
             string-input: join/with string-input word
             vowel: none
             character: word
    
             until any [
                 is-tail character
                 if find "aeiou" copy cut character 1 [vowel: character]
             ][
                 advance character
             ]
             either is-same word vowel [ ; Word starts with vowel
                 piglatin: join/with join/with piglatin word
                 "yayblurg" ; added "blurg"
             ][
                 either vowel [
                     piglatin: join/with join/with piglatin vowel
                     copy cut word (count word) - (count vowel)
                 ][
                     piglatin: join/with piglatin word
                 ]
                 piglatin: join/with piglatin "ayblurg" ; added "blurg"
             ]
         ]
         piglatin: join/with piglatin " "
         string-input: join/with string-input " "
     ]
]
...
Key Changes Made:
    
In the section where a word starts with a vowel and "yay" is appended, I changed "yay" to "yayblurg".
In the section where a word starts with a consonant and "ay" is appended, I changed "ay" to "ayblurg".
With these adjustments, the script will now append "blurg" to the end of each generated Pig Latin word.
    
    
    
    
How did it do?

posted by:   Nick     14-Oct-2023/21:24:32-7:00



Of course, I expect it typically does a far better job when working with APIs, code examples, etc. that are written using languages, frameworks, environments, etc., which it has been trained on deeply.

posted by:   Nick     14-Oct-2023/21:28:41-7:00



Pasting the Meta CGI code here on the forum would result in the browser displaying some of the HTML form code in the original Meta code, so I just replaced it with a placeholder, as it doesn't affect GPT's output.

posted by:   Nick     14-Oct-2023/21:47:33-7:00



I'm not sure how GPT performed with this task looking at Meta code, but perhaps this provides some initial inkling of how it can be useful in other cases.
    
This week, I had to integrate several thousand lines of code from a desktop app written in Python, made to generate Word documents from form inputs previously collected and processed via Google Forms. The job was to recreate the user interface as a web app they could run in house, integrate the legacy Word document creation code into the web app, and make dozens of alterations to the layout and content of the generated Word docs.
    
That's a super straightforward task, but the original forms were 8-9 pages long, each full of many form inputs, so it involved many hundreds of variables representing each response to questions in the original Google forms, many hundreds of conditional rules to parse the form values and produce appropriate content in the generated Word documents, each with associated CSV parsing operations (lists of values were contained in each of dozens of CSV fields, with different conditional parsing operations, based upon the configuration and values in each list), and a Python library which I had never used before was implemented to produce the Word document content.
    
This was the simplest challenge of all the projects I completed during the past month, but it required a large volume of repetitive, detailed grunt work. GPT saved me an enormous amount of time and frustration getting that grunt work done. I uploaded the full legacy code base and simply asked which variables needed to be changed, and which lines of code needed to be edited - and how - for every change to text and formatting rules in the output document. This not only saved hours tracing paths through function calls and their inconsistently named arguments in the legacy code, but also saved hours of boring mindless work looking up function call signatures in the documentation of the library that was used to generate each piece of the Word document (tables, bullets, headers with colors based on conditional content evaluations, lists of grouped data values within text, etc.). In most cases, GPT not only instantly identified variables which would otherwise have required painstakingly following the path of each of hundreds of values through many functions, each with different and inconsistent variable naming conventions - it also instantly generated the code needed to produce each conditional change in the output document, for free.
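To illustrate the pattern of conditional parsing of list-valued form fields described above (the field names, delimiter, and rules here are hypothetical, not from the actual project):

```python
# Hypothetical form export row: some fields hold delimiter-separated
# lists that each need their own conditional handling.
row = {
    "symptoms": "fatigue; headache; nausea",  # ';'-delimited list field
    "followup": "yes",
}

def parse_list_field(value, sep=";"):
    # Split a list-valued field and drop empty entries.
    return [v.strip() for v in value.split(sep) if v.strip()]

def render_section(row):
    # One of many conditional rules: phrase the generated document text
    # differently depending on how many values the list field contains.
    symptoms = parse_list_field(row["symptoms"])
    if not symptoms:
        body = "No symptoms reported."
    elif len(symptoms) == 1:
        body = f"Reported symptom: {symptoms[0]}."
    else:
        body = "Reported symptoms: " + ", ".join(symptoms) + "."
    if row.get("followup") == "yes":
        body += " Follow-up scheduled."
    return body

print(render_section(row))
# Reported symptoms: fatigue, headache, nausea. Follow-up scheduled.
```

A real project multiplies this by hundreds of fields, each with its own delimiter and rules - which is exactly the kind of mechanical variation an LLM handles quickly.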
    
Increasing the speed of this grunt work freed my time to work on much more complicated engineering challenges in other projects that I was working on simultaneously with other clients. The improved response times also made the clients really happy, as I was able to handle dozens of tickets at a time to perform document updates, typically within an hour, and almost always the same day, whereas those sorts of tickets used to take the previous developer days or weeks to complete. And of course, Anvil made this whole process much faster, because users could see and interact with each new version of the app instantly, and Anvil's handling of development/production versioning, syntax highlighting in the IDE, etc., made project management, repetitive coding chores, error checking, etc., super simple and fast to complete.
    
In this little project, there were many other small tasks, such as removing legacy Tkinter desktop UI code, converting data saved to files in the legacy code to in-memory Anvil media objects, and lots of detailed little related data transformations and paths through function calls, logic, etc., for which GPT instantly generated working code to ease what would have taken even more painstaking grunt work - all for free. Its ability to integrate code from multiple previously existing examples into one final block of working code is really impressive and productive.
    
Working with unknown libraries and APIs is the sort of thing which used to require lots of Googling, reading documentation, looking up questions on StackOverflow, etc. It used to take an enormous amount of painstaking, detailed grunt work to complete this sort of work. GPT and other LLMs never get tired and they never complain :) This sort of work can now very much be automated with LLMs, and the quality of generated output is staggeringly good when working with well known libraries. I'm consistently impressed with how GPT can generate code examples from documentation - even when there are no examples in the documentation (this is in-context learning) - and then integrate each of those pieces into a larger context, based on an explanation of the reasoning required. It's honestly often astounding how well it does, compared to how well a human would do on a first pass reasoning through the requirements - and it can output hundreds of working lines of code instantly, when that work would have taken lots of time, regardless of how simple or complex the task.
    
This not only improves my productivity, but my whole outlook and state of mind when approaching projects with lots of time consuming detail work.
    
But that's just the tip of the iceberg. I think one of the most fantastic capabilities of GPT is its ability to interact with questions about the reasoning required to complete a task. Using GPT's code interpreter, you can provide instructions for a task to be completed, and it will perform multiple iterations of code generation, based on reasoned evaluations of whether the code actually works to produce the correct outcome. It will change its approach and devise new thoughtful approaches based upon how the output does not conform to the requested outcome. It will display the code used to generate each step along the way, and it will explain the thought process chosen to discover a solution.
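A caricature of that generate-run-check loop, with a fixed list of candidate snippets standing in for the model (a real system would feed each failure back to an LLM to steer the next attempt):

```python
def run_candidate(code, env):
    # Execute one candidate; return its "result" or the error message.
    try:
        exec(code, env)
        return env.get("result"), None
    except Exception as e:
        return None, str(e)

def solve(candidates, check):
    # Try candidates in order until one runs cleanly AND passes the check.
    for attempt, code in enumerate(candidates, 1):
        result, error = run_candidate(code, {})
        if error is None and check(result):
            return attempt, result
        # In a real loop, `error` or the failing result would be sent
        # back to the model here to generate the next candidate.
    return None, None

# First candidate is buggy; the second "revised" one succeeds.
candidates = [
    "result = sum([1, 2, '3'])",  # raises TypeError
    "result = sum([1, 2, 3])",
]
attempt, result = solve(candidates, check=lambda r: r == 6)
print(attempt, result)  # 2 6
```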

posted by:   Nick     15-Oct-2023/10:10:30-7:00



Here's an example from a previous project in which I was provided a PDF of tables scanned from a text book. I needed to convert this tabular data, which was only available across dozens of pages of text from a printed paper book, to a data structure that could be processed in the app. There were other tables and lots of descriptive text on many of the pages of the book, along with the tabular data I needed to extract. This is the *iterative process the code interpreter devised:
    
    
    
ADEM score sheet (1).pdf
PDF
Please OCR the image in this PDF file. I want to convert the 'ADEM scores' chart into a python dictionary. For each line in the chart, the value in the 'age range (years)' column should be a key in the dictionary. The value for that key should be a dictionary in which each key is the column header ('vertical mean', 'vertical +- sd', etc.), and each value is the value in that row/column cell.
ChatGPT
Alright, I understand your request. Here are the steps I'll take to fulfill your task:
    
Extract the image from the PDF.
Perform OCR (Optical Character Recognition) on the image to extract the text and structure.
Convert the data into a dictionary format as requested.
Let's start with the first step: extracting the image from the PDF.
    
Finished working
Show work
I've successfully converted the PDF to an image. Now I'll perform OCR on the image to extract the text and structure. I'll use the pytesseract library for this.
    
Finished working
Show work
The OCR output contains a lot of unnecessary information, which is likely due to the complexity of the PDF layout. However, the last few lines appear to contain the 'ADEM scores'. Let's isolate and process those lines.
    
To do this, I will:
    
Split the text into lines.
Identify the section with the 'ADEM scores'.
Split each line in that section into its individual parts.
Construct the requested dictionary.
Finished working
Show work
It seems that the regex pattern didn't match any line in the extracted text. This suggests that the OCR extraction may not have accurately captured the tabular data from the image, or that the pattern didn't fit the actual format of the data.
    
Let's take a different approach. I'll try to extract the table directly from the PDF using a library called Tabula. Tabula is specifically designed to extract tables from PDFs, and it might be better suited for this task.
    
Finished working
Show work
I apologize for the misunderstanding. The Python environment in this interface doesn't support the Tabula package, which is a Java-based library for extracting tables from PDFs.
    
We are left with the OCR approach. Let's try to refine the OCR process by improving the image quality before extracting the text. This can often help improve the OCR results.
    
We can try enhancing the image quality by binarizing it (converting it to black and white), which can sometimes improve OCR results. Let's proceed with this.
    
Finished working
Show work
The OCR result seems to be clearer than before, but it still contains a lot of noise, likely due to the complexity of the layout in the original document.
    
Let's try to isolate and process the ADEM scores as we did before.
    
Finished working
Show work
I will now process this data and convert it into the required dictionary format. I'll interpret the data in the following way:
    
The first line of each group of 6 lines represents the 'age range (years)'.
The following 5 lines in each group represent the 'vertical mean', 'vertical +- sd', 'horizontal mean', 'horizontal +- sd', 'ratio mean', and 'ratio +- sd' respectively.
Let's proceed with creating the Python dictionary.
    
Finished working
Show work
My apologies for the confusion. It appears that I accidentally used the variable text which contains the OCR text, instead of the manually provided data. Let me correct this. I'll first format your provided data into a list of lines that can be processed.
    
Finished working
Show work
It seems like there are cases where two values are on the same line, separated by a space. I initially assumed each value would be on its own line based on the initial part of the data.
    
To resolve this issue, I'll modify the approach to handle cases where two values are present on the same line.
    
Let's adjust the process accordingly.
    
Finished working
Show work
Apologies for the misunderstanding. I see that the data format isn't consistent. Sometimes there's one value per line, and sometimes there are two. Let's modify the approach to handle both cases.
    
Since there's a clear pattern in the dataset (6 lines representing one data entry), I'll first reformat the data into six equal-length lists. Then I'll construct the dictionary accordingly.
    
Finished working
Show work
I see there's a mistake in the logic I used. Some of the vertical mean and vertical standard deviation lines contain two numbers. I'll adjust the logic to handle these cases.
    
Let's correct this.
    
Finished working
Show work
My apologies for the confusion. It seems I made a mistake in interpreting the data structure. Let's go through the data structure again:
    
Lines 1, 7, 13, 19, etc. (1 + 6n) represent the 'age range (years)'.
Lines 2, 8, 14, 20, etc. (2 + 6n) represent the 'vertical mean'.
Lines 3, 9, 15, 21, etc. (3 + 6n) represent the 'vertical +- sd'.
Lines 4, 10, 16, 22, etc. (4 + 6n) represent the 'horizontal mean'.
Lines 5, 11, 17, 23, etc. (5 + 6n) represent the 'horizontal +- sd'.
Lines 6, 12, 18, 24, etc. (6 + 6n) represent the 'ratio mean' and 'ratio +- sd'.
Now let's correct the code to reflect this structure.
    
Finished working
Show work
I apologize for the confusion. It seems that I made a mistake when handling lines with two numbers. For the 'vertical mean' and 'vertical +- sd', there should only be one number per line, so we should not encounter lines with two numbers.
    
It seems that the issue arises because the age ranges are repeated in the data. The 'vertical mean' for the first age range (14-18) is '50.50', but the next number in the sequence is '50.50' again, which is the 'vertical mean' for the second instance of the same age range.
    
We need to handle this repetition in the data. Let's adjust the code accordingly.
    
Finished working
Show work
I apologize for the confusion. It seems that I've misunderstood the structure of the data. The correct structure appears to be:
    
Lines 1, 7, 13, 19, etc. (1 + 6n) represent the 'age range (years)'.
Lines 2, 8, 14, 20, etc. (2 + 6n) represent the 'vertical mean'.
Lines 3, 9, 15, 21, etc. (3 + 6n) represent the 'vertical +- sd'.
Lines 4, 10, 16, 22, etc. (4 + 6n) represent the 'horizontal mean'.
Lines 5, 11, 17, 23, etc. (5 + 6n) represent the 'horizontal +- sd'.
Lines 6, 12, 18, 24, etc. (6 + 6n) represent the 'ratio mean' and 'ratio +- sd'.
When I tried to split the 'vertical mean' data into two lists (since each age range appears twice), I mistakenly included the 'ratio mean' and 'ratio +- sd' data as well. This caused a ValueError because the 'ratio mean' and 'ratio +- sd' data contains two values per line, separated by a space.
    
Let's correct the code to reflect the proper structure.
    
Finished working
Show work
I apologize for the confusion. There seems to be a mistake in my approach. For some rows, the 'ratio mean' and 'ratio +- sd' values are on separate lines, while for others, they are on the same line.
    
To handle this, I'll adjust the code to account for the different formats. I'll also change the structure to ensure that for each age range, we have a list of dictionaries instead of a dictionary of lists. This will make it easier to handle potential differences in the number of data points per age range. Let's adjust the process accordingly.
    
Finished working
Show work
I apologize for the confusion. It seems I made a mistake when interpreting the structure of the data. The 'ratio mean' and 'ratio +- sd' are not on separate lines; they are always on the same line, but sometimes the 'ratio +- sd' is missing.
    
To handle this, I'll adjust the code to check if the 'ratio +- sd' exists. If it does, I'll include it; otherwise, I'll assign it a value of None. This should correctly handle the different formats. Let's adjust the process accordingly.
    
Finished working
Show work
    
    
    
User
can I please have the final output file?
ChatGPT
Finished working
Show work
You can download the file using the following link:
    
Download adem_scores.py
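For what it's worth, once the OCR noise is cleaned up, the target structure from the original prompt is straightforward: {age_range: {column_header: value, ...}, ...}. The rows below are mostly hypothetical (only the '50.50' vertical mean for 14-18 appears in the transcript), just to show the shape requested:

```python
HEADERS = ["vertical mean", "vertical +- sd",
           "horizontal mean", "horizontal +- sd",
           "ratio mean", "ratio +- sd"]

# Cleaned-up rows: age range first, then one value per column header.
rows = [
    ["14-18", "50.50", "4.1", "52.3", "3.9", "0.97", "0.05"],
    ["19-24", "49.80", "4.4", "51.9", "4.2", "0.96", "0.06"],
]

# Pair each row's values with the column headers, keyed by age range.
adem_scores = {row[0]: dict(zip(HEADERS, row[1:])) for row in rows}

print(adem_scores["14-18"]["vertical mean"])  # 50.50
```

All of the iterative back-and-forth in the transcript above was about getting the OCR output into that clean row form - the final data structure itself is a two-line dictionary comprehension.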


posted by:   Nick     15-Oct-2023/10:19:11-7:00



That was all automatic. What's really impressive is how you can interact with that process, explain how you'd like output transformed, or explain any additional logic you'd like inserted. The complexity of reasoning and logic it can handle is really great, and the code interpreter's ability to evaluate its own work and devise new creative solutions based on its knowledge of *all the tools available in an ecosystem (more knowledge than any single person or group of people could hold in their brain(s)) - and then its ability to integrate solutions based upon documentation it hasn't previously been trained on (documentation which you provide in line during interactive sessions) - all that leads to an incredibly useful tool.
    
To be clear, I don't think this points to any sort of sentience - and of course, the whole process requires human involvement at many levels, including even the most basic steps of communicating with clients, devising requirements, and setting specifications needed to build an app.
    
But as a tool, LLMs can be tremendously useful in reducing work load and leaving brain cells fresh to perform more challenging engineering tasks - and possibly even to help complete innovative work ;)

posted by:   Nick     15-Oct-2023/10:32:16-7:00



BTW, here's some of the process GPT went through to evaluate Arnold's Meta documentation:
    
From the uploaded documentation (downloaded from https://arnoldvanhofwegen.com/the/unofficial/meta/dictionary.html), can you please explain how to build a web app which accepts a person's name, address, and phone, and saves each entry as a line in a CSV file?
ChatGPT
Let's first correctly parse the entire uploaded documentation. Since we've established that the content might not be in the main index.html, I'll explore the other extracted files and directories to locate relevant documentation that can guide us in building a web app with the specified requirements in the Meta language.
    
Finished working
Show work
It appears there's a subdirectory named "the" inside the arnoldvanhofwegen.com directory. I'll further explore the "the" directory to locate relevant documentation.
    
Finished working
Show work
There's a further subdirectory named "unofficial" inside the "the" directory. I'll continue the exploration by diving into the "unofficial" directory, hoping to find relevant content about the Meta language and building web apps.
    
Finished working
Show work
The "unofficial" directory contains subdirectories named "images" and "meta." The "images" directory is likely to contain graphical assets, so our interest lies in the "meta" directory, which might contain the relevant documentation for the Meta language. Let's proceed to explore the "meta" directory.
    
Finished working
Show work
The "meta" directory contains multiple HTML files that might have relevant documentation:
    
cgiprogramming.html
dictionary.html
examples.html
guide.html
Given that our goal is to build a web app that interacts with user input and saves data to a CSV file, the cgiprogramming.html file seems like a promising starting point, as it suggests content related to web programming with CGI (Common Gateway Interface). Let's start by parsing the cgiprogramming.html file to extract relevant details.
    
Finished working
Show work
The cgiprogramming.html file contains information about CGI programming with Meta. Here's a summarized interpretation of the content:
    
CGI Programming with Meta
    
Why CGI Programming?: CGI programming in some languages, like C, can be challenging, and is often discouraged. Using interpreted languages, such as Perl, for CGI requires running an interpreter, which can be a burden on the server. In contrast, a pre-compiled script, like those in C, doesn't have this disadvantage. However, the drawback with C is that it can be difficult to get things right.
    
Advantages of Using Meta for CGI: Meta offers a solution that combines the best of both worlds. With Meta, you can create pre-compiled CGI scripts that are easier to write and understand. Meta programs are human-readable, making it easier to spot errors before deploying the program on the web.
    
CGI for Database Connectivity: At the current stage of development, CGI programming is the only method available for connecting to databases like MySQL or MariaDB using Meta. However, as the language develops, other options may become available.
    
While this content provides an overview of CGI programming with Meta, it doesn't provide a specific guide on building a web app to accept user inputs and save them to a CSV file. We might need to explore the other files, such as examples.html or guide.html, to find more detailed instructions.
    
Would you like me to continue exploring these files for relevant content?
    
    
    
    
User
yes, please continue
ChatGPT
I'll proceed by exploring the examples.html file next. This file seems promising as it might contain practical examples of web app development using Meta, which could guide us in building the desired application.
    
Based on the content extracted from the examples.html file, we have some examples that provide guidance on various aspects of using Meta:
    
How to Compile a Program Quickly: This section provides a shell script example for Linux that compiles a Meta program, copies it, runs it, and returns to the directory from which it was run.
    
How to Write a File in Meta: This section begins with an example of how to write to a file using Meta. The content cuts off after the initial setup, but it suggests that there will be details on file operations in Meta.
    
Given the focus on file operations in the provided content, the information about writing to a CSV file might be present in the continuation of the "How can you write a file in Meta?" section.
    
I'll further explore the content to see if there are examples or details on accepting user input and writing to a CSV file.
    
Based on the extracted content from the examples.html file, we found a segment related to file operations in Meta. However, it seems to be an example related to another problem, not specifically about accepting user input for a person's details and saving it to a CSV file.
    
Considering the breadth and depth of the documentation, it might be more practical to directly explore the guide.html file next. This file could contain a more comprehensive guide or instructions on building web apps with Meta. Let's proceed to parse the guide.html file.
    
The extracted content from the guide.html file provides insights into programming in Meta, emphasizing the importance of developing an improved programming style for Meta and ensuring efficient code generation.
    
However, the displayed segment does not yet provide specific instructions for building a web app to accept user inputs and save them to a CSV file.
    
Considering the length and breadth of the documentation, I'll continue to parse the guide.html file to search for elements or sections related to web app development, input fields, and CSV file operations.
    
The content from the guide.html file provides some insights into file operations in Meta:
    
There's mention of handling files and the suggestion to "look at the example section to find an elaborate example of handling files."
There are compilation logs, warnings, and errors related to file operations and type mismatches.
    
However, the extracted content doesn't provide a specific guide or example on building a web app to accept user inputs and save them to a CSV file.
    
Given the available content, our next step would be to explore the remaining file, dictionary.html, which might provide a dictionary or reference of Meta functions and operations. This file could potentially contain details on file operations, accepting user input, and working with CSV files.
    
Would you like me to continue exploring the dictionary.html file?
    
    
    
    
User
yes, please continue
ChatGPT
Alright, I'll proceed by parsing the dictionary.html file. This file might provide a dictionary or reference of Meta functions and operations, which could guide us in building the desired web app.
    
Based on the extracted content from the dictionary.html file, we have a list of functions and operations in Meta, with brief descriptions:
    
ABSOLUTE: Gives back the absolute value of a NUMBER! type value.
ADD: Adds two numbers.
ADVANCE: Advances a STRING! variable one position.
ALL: The Meta way of using a conjunction, equivalent to the Boolean AND operator in other languages.
ALSO: A convenience function to save data without creating a temporary variable.
AND: Bitwise AND operator.
ANY: The Meta way of using a disjunction, equivalent to the Boolean OR operator in other languages.
ANY-INTEGER!: Synonym for all INTEGER! types.
APPEND: Appends data to a FILE! to write to a file.
ARRAY!: Qualifies the ARRAY type in Meta.
AS: Describes how the data should be interpreted.
ASK: Gets input from a user.
    
From the provided segment, the APPEND function seems relevant for writing to a file. The ASK function can potentially be used for getting input from a user.
    
Let's continue exploring the content to identify functions or operations that can help in building a web app that accepts a person's name, address, phone, and saves each entry to a CSV file.
    
From the dictionary.html file, we identified several relevant functions and datatypes related to file operations:
    
APPEND: Appends data to a FILE! to write to a file.
CLOSE: Closes a file handle.
FILE!: Represents the file datatype in Meta.
OPEN: Opens a file handle for the program.
PORT!: Represents a computer port used for file IO and other communication.
    
While these functions provide a foundation for working with files in Meta, the content does not yet provide specific instructions or examples for building a web app to accept user inputs and save them to a CSV file.
    
The next step would be to explore whether there's a way to accept input in Meta using the ASK function (from the previous file) and then use the file-related functions to save the input to a CSV file.
    
Would you like me to proceed by synthesizing an example based on the functions we identified, or would you prefer another approach?
    
...
    
    
Please ensure you refer to official Meta documentation or resources to verify the correctness and to make necessary adjustments.
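For reference, the behavior being sought in this session (accept a person's name, address, and phone, and append each entry to a CSV file) is compact in most languages. The sketch below is Python rather than Meta, purely to illustrate the target logic; the file and function names are arbitrary, not from the Meta documentation:

```python
import csv

def append_entry(path, name, address, phone):
    # Append one submission as a single CSV row; the csv module quotes
    # fields containing commas, so free-form addresses stay intact.
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([name, address, phone])

# Example: each call adds one line to the file.
append_entry("entries.csv", "Jane Doe", "1 Main St, Springfield", "555-0100")
```

In a CGI setting, the three values would come from the posted form fields rather than function arguments, but the file-append step is the same.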

posted by:   Nick     15-Oct-2023/10:38:09-7:00



I'm learning to use the payarc api. Can you explain how data is stored in their customer vault during transactions?
    
ChatGPT
I'm sorry, but as of my last knowledge update in September 2021, I am not aware of a specific API or service called "Payarc" and its details regarding how data is stored in their customer vault during transactions. It's possible that Payarc is a relatively new service or that it operates in a niche market, and therefore, information about it may not be widely available.
    
To get detailed information about how data is stored in Payarc's customer vault during transactions, I recommend visiting Payarc's official website or developer documentation. You should be able to find comprehensive information about their API, data storage practices, and transaction processes there. Additionally, you may want to reach out to their support or technical team for any specific questions or clarifications you need about their API and data handling procedures.
    
    
    
Methodology
Payment API
Transactions
    
Steps:
1. The customer sends their payment information to the merchant's web site.
2. The merchant web site posts the payment data to the Payment Gateway.
3. The Payment Gateway responds immediately with the results of the transactions.
4. The merchant web site displays the appropriate message to the customer.
    
The communication method used to send messages to the Payment Gateway's server is the standard HTTP protocol over an SSL connection.
    
In the Payment API method, the communications with the cardholder (Steps 1 and 4) are developed completely by the merchant and therefore are not defined by the Payment Gateway. Step 1 should simply collect the payment data from the cardholder and Step 4 should display the appropriate transaction receipt or declined message.
    
In Step 2, transaction details should be delivered to the Payment Gateway using the POST method with the appropriate variables defined below posted along with the request.
    
In Step 3, the transaction responses are returned in the body of the HTTP response in a query string name/value format delimited by ampersands. For example: variable1=value1&variable2=value2&variable3=value3
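The ampersand-delimited name/value format described above has the same shape as a URL query string, so a standard query-string parser can handle it. A minimal Python sketch (the helper name is mine, not part of the gateway API):

```python
from urllib.parse import parse_qs

def parse_gateway_response(body):
    # Split "variable1=value1&variable2=value2" into a flat dict,
    # taking the first value for each name.
    return {k: v[0] for k, v in parse_qs(body).items()}

fields = parse_gateway_response("variable1=value1&variable2=value2&variable3=value3")
```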
    
Customer Vault
The Customer Vault was designed specifically for businesses of any size to address concerns about handling customer payment information. Visa and MasterCard have instituted the Payment Card Industry (PCI) Data Security to protect cardholder data, wherever it resides, ensuring that members, merchants, and service providers maintain the highest information security standards.
    
These associations have also deemed that merchants will be held liable for any breach of cardholder data. This has become a major concern for merchants who handle credit card or electronic check payments. The Customer Vault is designed for these merchants who desire to avoid the tremendous costs and resources involved in becoming PCI compliant under these circumstances.
    
The Customer Vault does this by allowing merchants to transmit their payment information through a Secure Sockets Layer (SSL) connection for storage in our Level 1 PCI certified data facility. Once the customer record has been securely transmitted to the Customer Vault, the merchant can then initiate transactions remotely without having to access cardholder information directly. This process is accomplished without the merchant storing the customer's payment information in their local database or payment application.
ChatGPT
Thank you for providing information about the Payarc methodology and Customer Vault. Based on the information you've shared, here's a summary of how transactions are handled in Payarc:
    
Payment API Methodology:
    
Customer Payment Information Collection (Step 1): The customer provides their payment information to the merchant's website. This step is developed by the merchant and is not defined by the Payment Gateway. It involves collecting payment data from the cardholder.
    
Posting Payment Data to Payment Gateway (Step 2): The merchant's website posts the payment data to the Payment Gateway. This is done using the standard HTTP protocol over an SSL (Secure Sockets Layer) connection. The transaction details should be delivered to the Payment Gateway using the POST method with specific variables posted along with the request.
    
Transaction Response (Step 3): The Payment Gateway responds immediately with the results of the transaction. The transaction responses are returned in the body of the HTTP response in a query string name/value format delimited by ampersands (e.g., variable1=value1&variable2=value2&variable3=value3).
    
Displaying Transaction Messages (Step 4): The merchant's website displays the appropriate message to the customer based on the response received from the Payment Gateway. This can include transaction receipts or declined messages.
    
Customer Vault:
    
The Customer Vault is designed to address concerns about handling customer payment information, particularly regarding Payment Card Industry (PCI) Data Security standards enforced by Visa and MasterCard.
    
PCI standards require businesses to protect cardholder data, and merchants can be held liable for any breaches. The Customer Vault helps merchants avoid the costs and resources associated with becoming PCI compliant.
    
Payment information is transmitted to the Customer Vault through an SSL connection and is stored in a Level 1 PCI certified data facility operated by Payarc.
    
Once the customer's payment information is securely stored in the Customer Vault, merchants can initiate transactions remotely without having to directly access cardholder information. This process ensures that merchants do not store the customer's payment information in their local database or payment application.
    
In summary, Payarc's methodology involves secure transmission and handling of payment data between the customer, the merchant, and the Payment Gateway. The Customer Vault is designed to help merchants comply with PCI standards while securely storing and managing customer payment information. It allows for remote transaction initiation without the need for local storage of sensitive cardholder data.
    
    
    
    
User
Payment API
Sale (sale)
Transaction sales are submitted and immediately flagged for settlement.
    
Authorization (auth)
Transaction authorizations are authorized immediately but are not flagged for settlement. These transactions must be flagged for settlement using the capture transaction type.
    
Capture (capture)
Transaction captures flag existing authorizations for settlement. Only authorizations can be captured. Captures can be submitted for an amount equal to or less than the original authorization.
    
Void (void)
Transaction voids will cancel an existing sale or captured authorization. In addition, non-captured authorizations can be voided to prevent any future capture. Voids can only occur if the transaction has not been settled.
    
Refund (refund)
Transaction refunds will reverse a previously settled or pending settlement transaction. If the transaction has not been settled, a transaction void can also reverse it.
    
Credit (credit)
Transaction credits apply an amount to the cardholder's card that was not originally processed through the Gateway. In most situations credits are disabled as transaction refunds should be used instead.
    
Validate (validate)
This action is used for doing an "Account Verification" on the cardholder's credit card without actually doing an authorization.
    
Update (update)
Transaction updates can be used to update previous transactions with specific order information, such as a tracking number and shipping carrier.
ChatGPT
Thank you for providing an overview of the different transaction types available in the Payarc Payment API. These transaction types serve various purposes in processing payments and managing transactions. Here's a brief explanation of each transaction type:
    
Sale (sale): Sales transactions are submitted and immediately flagged for settlement. This means that the transaction is both authorized and settled in one step.
    
Authorization (auth): Authorization transactions are authorized immediately but are not flagged for settlement. These transactions must be explicitly flagged for settlement using the capture transaction type. Authorization is used to check if funds are available on a card but doesn't initiate the actual transfer of funds.
    
Capture (capture): Capture transactions are used to flag existing authorizations for settlement. Only authorizations can be captured. Captures can be submitted for an amount equal to or less than the original authorization. It essentially completes the payment process initiated by an authorization.
    
Void (void): Void transactions cancel an existing sale or captured authorization. Additionally, non-captured authorizations can be voided to prevent any future capture. Voids can only occur if the transaction has not been settled.
    
Refund (refund): Refund transactions reverse a previously settled or pending settlement transaction. If the transaction has not been settled, a transaction void can also be used to reverse it. Refunds are typically used for returning funds to the cardholder.
    
Credit (credit): Credit transactions apply an amount to the cardholder's card that was not originally processed through the Gateway. In most cases, credits are disabled, and transaction refunds are recommended for refunding money to the cardholder.
    
Validate (validate): The validate action is used for performing an "Account Verification" on the cardholder's credit card without actually authorizing a payment. It's often used to verify the validity of a card without initiating a transaction.
    
Update (update): Transaction updates are used to update previous transactions with specific order information, such as a tracking number and shipping carrier. This allows merchants to provide customers with up-to-date information about their orders.
    
These transaction types provide flexibility for processing payments and managing transactions, allowing merchants to perform various actions based on their specific needs and business processes. It's important for developers and merchants to understand when and how to use each transaction type to ensure smooth payment processing and customer satisfaction.
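As an illustration of the auth-then-capture lifecycle, the two request bodies might be assembled like this in Python. This is a hedged sketch: the helper names are hypothetical, while the `type`, `transactionid`, and amount-limit rules come from the documentation pasted here:

```python
from urllib.parse import urlencode

def auth_params(security_key, ccnumber, ccexp, amount):
    # Authorize only: funds are checked but not flagged for settlement.
    return urlencode({"type": "auth", "security_key": security_key,
                      "ccnumber": ccnumber, "ccexp": ccexp,
                      "amount": f"{amount:.2f}"})

def capture_params(security_key, transactionid, amount, authorized_amount):
    # Only authorizations can be captured, for an amount equal to or
    # less than the original authorization.
    if amount > authorized_amount:
        raise ValueError("capture amount exceeds original authorization")
    return urlencode({"type": "capture", "security_key": security_key,
                      "transactionid": transactionid,
                      "amount": f"{amount:.2f}"})
```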
    
    
    
    
User
Transaction Variables
Payment API
POST URL
POST URL:    https://payarc.transactiongateway.com/api/transact.php
Sale/Authorization/Credit/Validate/Offline
Variable Name    Description
type*    The type of transaction to be processed.
Values: 'sale', 'auth', 'credit', 'validate', or 'offline'
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
payment_token    The tokenized version of the customer's card or check information. This will be generated by Collect.js and is usable only once.
transaction_session_id‡‡‡‡    A single use session ID used by Kount to link the transaction and Data Collector information together. This ID should be generated every time a payment form is loaded by the cardholder, and be random/unpredictable (do not use sequential IDs). This ID should not be reused within a 30 day period. This can be used with Collect.js or the Payment API when using the Kount DDC with Gateway.js.
Format: alphanumeric, 32 characters required
googlepay_payment_data    The encrypted token created when integration directly to the Google Pay SDK.
ccnumber**    Credit card number.
ccexp**    Credit card expiration date.
Format: MMYY
cvv    The card security code. While this is not required, it is strongly recommended.
checkname***    The name on the customer's ACH account.
checkaba***    The customer's bank routing number.
checkaccount***    The customer's bank account number.
account_holder_type    The type of ACH account the customer has.
Values: 'business' or 'personal'
account_type    The ACH account entity of the customer.
Values: 'checking' or 'savings'
sec_code    The Standard Entry Class code of the ACH transaction.
Values: 'PPD', 'WEB', 'TEL', or 'CCD'
amount    Total amount to be charged. For validate, the amount must be omitted or set to 0.00.
Format: x.xx
surcharge    Surcharge amount.
Format: x.xx
cash_discount    How much less a customer paid due to a cash discount.
Format: x.xx, only applicable to cash and check transactions
tip    The final tip amount, included in the transaction, associated with the purchase
Format: x.xx
currency    The transaction currency. Format: ISO 4217
payment***    The type of payment.
Default: 'creditcard'
Values: 'creditcard', 'check', or 'cash'
processor_id    If using Multiple MIDs, route to this processor (processor_id is obtained under Settings → Transaction Routing in the Control Panel).
authorization_code‡    Specify authorization code. For use with "offline" action only.
dup_seconds    Sets the time in seconds for duplicate transaction checking on supported processors. Set to 0 to disable duplicate checking. This value should not exceed 7862400.
descriptor    Set payment descriptor on supported processors.
descriptor_phone    Set payment descriptor phone on supported processors.
descriptor_address    Set payment descriptor address on supported processors.
descriptor_city    Set payment descriptor city on supported processors.
descriptor_state    Set payment descriptor state on supported processors.
descriptor_postal    Set payment descriptor postal code on supported processors.
descriptor_country    Set payment descriptor country on supported processors.
descriptor_mcc    Set payment descriptor mcc on supported processors.
descriptor_merchant_id    Set payment descriptor merchant id on supported processors.
descriptor_url    Set payment descriptor url on supported processors.
billing_method    Should be set to 'recurring' to mark payment as a recurring transaction or 'installment' to mark payment as an installment transaction.
Values: 'recurring', 'installment'
billing_number    Specify installment billing number, on supported processors. For use when "billing_method" is set to installment.
Values: 0-99
billing_total    Specify installment billing total on supported processors. For use when "billing_method" is set to installment.
order_template    Order template ID.
order_description    Order description.
Legacy variable includes: orderdescription
orderid    Order Id
ipaddress    IP address of cardholder, this field is recommended.
Format: xxx.xxx.xxx.xxx
tax****    The sales tax included in the transaction amount associated with the purchase. Setting tax equal to any negative value indicates an order that is exempt from sales tax.
Default: '0.00'
Format: x.xx
shipping****    Total shipping amount.
ponumber****    Original purchase order.
first_name    Cardholder's first name.
Legacy variable includes: firstname
last_name    Cardholder's last name
Legacy variable includes: lastname
company    Cardholder's company
address1    Card billing address
address2    Card billing address, line 2
city    Card billing city
state    Card billing state.
Format: CC
zip    Card billing zip code
country    Card billing country.
Country codes are as shown in ISO 3166. Format: CC
phone    Billing phone number
fax    Billing fax number
email    Billing email address
social_security_number    Customer's social security number, checked against bad check writers database if check verification is enabled.
drivers_license_number    Driver's license number.
drivers_license_dob    Driver's license date of birth.
drivers_license_state    The state that issued the customer's driver's license.
shipping_firstname    Shipping first name
shipping_lastname    Shipping last name
shipping_company    Shipping company
shipping_address1    Shipping address
shipping_address2    Shipping address, line 2
shipping_city    Shipping city
shipping_state    Shipping state
Format: CC
shipping_zip    Shipping zip code
shipping_country    Shipping country
Country codes are as shown in ISO 3166. Format: CC
shipping_email    Shipping email address
merchant_defined_field_#    You can pass custom information in up to 20 fields.
Format: merchant_defined_field_1=Value
customer_receipt    If set to true, when the customer is charged, they will be sent a transaction receipt.
Values: 'true' or 'false'
signature_image    Cardholder signature image. For use with "sale" and "auth" actions only.
Format: base64 encoded raw PNG image. (16kiB maximum)
cardholder_auth‡‡    Set 3D Secure condition. Value used to determine E-commerce indicator (ECI).
Values: 'verified' or 'attempted'
cavv‡‡    Cardholder authentication verification value.
Format: base64 encoded
xid‡‡    Cardholder authentication transaction id.
Format: base64 encoded
three_ds_version‡‡    3DSecure version.
Examples: "2.0.0" or "2.2.0"
directory_server_id    Directory Server Transaction ID. May be provided as part of 3DSecure 2.0 authentication.
Format: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
source_transaction_id    Specifies a payment gateway transaction id in order to associate payment information with a Subscription or Customer Vault record. Must be set with a 'recurring' or 'customer_vault' action.
pinless_debit_override    Set to 'Y' if you have Pinless Debit Conversion enabled but want to opt out for this transaction. Feature applies to selected processors only.
Recurring specific fields
recurring    Recurring action to be processed.
Values: add_subscription
plan_id    Create a subscription tied to a Plan ID if the sale/auth transaction is successful.
plan_payments    The number of payments before the recurring plan is complete.
Note: Use '0' for 'until canceled'
plan_amount    The plan amount to be charged each billing cycle.
Format: x.xx
day_frequency    How often, in days, to charge the customer. Cannot be set with 'month_frequency' or 'day_of_month'.
month_frequency    How often, in months, to charge the customer. Cannot be set with 'day_frequency'. Must be set with 'day_of_month'.
Values: 1 through 24
day_of_month    The day that the customer will be charged. Cannot be set with 'day_frequency'. Must be set with 'month_frequency'.
Values: 1 through 31 - for months without 29, 30, or 31 days, the charge will be on the last day
start_date    The first day that the customer will be charged.
Format: YYYYMMDD
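The mutual-exclusion rules among 'day_frequency', 'month_frequency', and 'day_of_month' are easy to get wrong. A small Python validator sketch (the helper is hypothetical; it encodes only the constraints stated above):

```python
def recurring_schedule(day_frequency=None, month_frequency=None, day_of_month=None):
    # 'day_frequency' cannot be combined with 'month_frequency' or
    # 'day_of_month'; the latter two must be set together.
    if day_frequency is not None:
        if month_frequency is not None or day_of_month is not None:
            raise ValueError("day_frequency is exclusive of month_frequency/day_of_month")
        return {"day_frequency": day_frequency}
    if month_frequency is None or day_of_month is None:
        raise ValueError("month_frequency and day_of_month must be set together")
    return {"month_frequency": month_frequency, "day_of_month": day_of_month}
```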
Customer Vault specific fields
customer_vault    Associate payment information with a Customer Vault record if the transaction is successful.
Values: 'add_customer' or 'update_customer'
customer_vault_id    Specifies a Customer Vault id. If not set, the payment gateway will randomly generate a Customer Vault id.
Stored Credentials (CIT/MIT)
initiated_by    Who initiated the transaction.
Values: 'customer' or 'merchant'
initial_transaction_id    Original payment gateway transaction id.
stored_credential_indicator    The indicator of the stored credential.
Values: 'stored' or 'used'
Use 'stored' when processing the initial transaction in which you are storing a customer's payment details (customer credentials) in the Customer Vault or other third-party payment storage system.
Use 'used' when processing a subsequent or follow-up transaction using the customer payment details (customer credentials) you have already stored to the Customer Vault or third-party payment storage method.
Level III specific order fields
shipping†    Freight or shipping amount included in the transaction amount.
Default: '0.00'
Format: x.xx
tax†    The sales tax, included in the transaction amount, associated with the purchase. Setting tax equal to any negative value indicates an order that is exempt from sales tax.
Default: '0.00'
Format: x.xx
ponumber†    Purchase order number supplied by cardholder
orderid†    Identifier assigned by the merchant. This defaults to gateway transaction id.
shipping_country†    Shipping country (e.g. US)
Format: CC
shipping_postal†    Postal/ZIP code of the address where purchased goods will be delivered. This field can be identical to the 'ship_from_postal' if the customer is present and takes immediate possession of the goods.
ship_from_postal†    Postal/ZIP code of the address from where purchased goods are being shipped, defaults to merchant profile postal code.
summary_commodity_code†    4 character international description code of the overall goods or services being supplied. The acquirer or processor will provide a list of current codes.
duty_amount    Amount included in the transaction amount associated with the import of purchased goods.
Default: '0.00'
Format: x.xx
discount_amount    Amount included in the transaction amount of any discount applied to complete order by the merchant.
Default: '0.00'
Format: x.xx
national_tax_amount    The national tax amount included in the transaction amount.
Default: '0.00'
Format: x.xx
alternate_tax_amount    Second tax amount included in the transaction amount in countries where more than one type of tax can be applied to the purchases.
Default: '0.00'
Format: x.xx
alternate_tax_id    Tax identification number of the merchant that reported the alternate tax amount.
vat_tax_amount    Contains the amount of any value added taxes which can be associated with the purchased item.
Default: '0.00'
Format: x.xx
vat_tax_rate    Contains the tax rate used to calculate the sales tax amount appearing. Can contain up to 2 decimal places, e.g. 1% = 1.00.
Default: '0.00'
Format: x.xx
vat_invoice_reference_number    Invoice number that is associated with the VAT invoice.
customer_vat_registration    Value added tax registration number supplied by the cardholder.
merchant_vat_registration    Government assigned tax identification number of the merchant for whom the goods or services were purchased from.
order_date    Purchase order date, defaults to the date of the transaction.
Format: YYMMDD
Level III specific line item detail fields
item_product_code_#†    Merchant defined description code of the item being purchased.
item_description_#†    Description of the item(s) being supplied.
item_commodity_code_#†    International description code of the individual good or service being supplied. The acquirer or processor will provide a list of current codes.
item_unit_of_measure_#†    Code for units of measurement as used in international trade.
Default: 'EACH'
item_unit_cost_#†    Unit cost of item purchased, may contain up to 4 decimal places.
item_quantity_#†    Quantity of the item(s) being purchased.
Default: '1'
item_total_amount_#†    Purchase amount associated with the item. Defaults to: 'item_unit_cost_#' x 'item_quantity_#' rounded to the nearest penny.
item_tax_amount_#†    Amount of sales tax on specific item. Amount should not be included in 'total_amount_#'.
Default: '0.00'
Format: x.xx
item_tax_rate_#†    Percentage representing the value-added tax applied.
Default: '0.00'
item_discount_amount_#    Discount amount which can have been applied by the merchant on the sale of the specific item. Amount should not be included in 'total_amount_#'.
item_discount_rate_#    Discount rate for the line item. 1% = 1.00.
Default: '0.00'
item_tax_type_#    Type of value-added taxes that are being used.
item_alternate_tax_id_#    Tax identification number of the merchant that reported the alternate tax amount.
Payment Facilitator Specific Fields
payment_facilitator_id‡‡‡    Payment Facilitator/Aggregator/ISO's ID Number
submerchant_id‡‡‡    Sub-merchant Account ID
submerchant_name‡‡‡    Sub-merchant's Name
submerchant_address‡‡‡    Sub-merchant's Address
submerchant_city‡‡‡    Sub-merchant's City
submerchant_state‡‡‡    Sub-merchant's State
submerchant_postal‡‡‡    Sub-merchant's Zip/Postal Code
submerchant_country‡‡‡    Sub-merchant's Country
submerchant_phone‡‡‡    Sub-merchant's Phone Number
submerchant_email‡‡‡    Sub-merchant's Email Address
*    Always required
**    Required for credit card transactions
***    Required for ACH transactions
****    Required for Level 2 transactions
†    Required for Level 3 transactions
‡    Required for offline transactions
‡‡    Required for 3D Secure transactions
‡‡‡    Required fields for Payment Facilitator enabled transactions vary by card brand
‡‡‡‡    Required for API transactions using the Kount Service
Notes:
Level II fields are required for Level II processing.
Level II and Level III fields are required for Level III processing.
You can pass only credit card or e-check transaction variables in a request, not both in the same request.
Certain banks may require some optional fields.
Some characters output from base64 encoding cannot be passed directly into the API (e.g. "+"), so ensure these fields are also properly URL encoded.
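Putting the always-required (*) and credit-card-required (**) sale fields together, a request body for the POST URL above might be built like this in Python. This is an illustrative sketch that only assembles the body (it does not send anything); the helper name is mine, and any extra fields passed through are the caller's responsibility:

```python
from urllib.parse import urlencode

POST_URL = "https://payarc.transactiongateway.com/api/transact.php"

def sale_body(security_key, ccnumber, ccexp, amount, cvv=None, **optional):
    # Required fields for a credit-card sale; ccexp is MMYY, amount is x.xx.
    params = {"type": "sale", "security_key": security_key,
              "ccnumber": ccnumber, "ccexp": ccexp,
              "amount": f"{amount:.2f}"}
    if cvv is not None:          # not required, but strongly recommended
        params["cvv"] = cvv
    params.update(optional)      # e.g. first_name, zip, orderid, ipaddress
    return urlencode(params)     # POSTed to POST_URL over HTTPS
```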
Capture
Variable Name    Description
type*    Type of transaction.
Values: 'capture'
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
transactionid*    Original payment gateway transaction id
amount*    Total amount to be settled. This amount must be equal to or less than the original authorized amount.
Format: x.xx
tracking_number    Shipping tracking number
shipping_carrier    Shipping carrier.
Values: 'ups', 'fedex', 'dhl', or 'usps'
orderid    Order id.
signature_image    Cardholder signature image.
Format: base64 encoded raw PNG image. (16kiB maximum)
*    Always required
Void
Variable Name    Description
type*    Type of transaction.
Values: 'void'
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
transactionid*    Original payment gateway transaction id
void_reason**    Reason the EMV transaction is being voided.
Values: 'fraud', 'user_cancel', 'icc_rejected', 'icc_card_removed', 'icc_no_confirmation', or 'pos_timeout'
payment***    The type of payment.
Default: 'creditcard'
Values: 'creditcard' or 'check'
*    Always required
**    Conditionally required for EMV transactions
***    Required for ACH transactions
Refund
Variable Name    Description
type*    Type of transaction.
Values: 'refund'
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
transactionid*    Original payment gateway transaction id
amount    Total amount to be refunded. This amount may be equal to or less than the settled amount. Setting the amount to 0.00 will refund the entire amount.
Format: x.xx
payment**    The type of payment.
Default: 'creditcard'
Values: 'creditcard' or 'check'
*    Always required
**    Required for ACH transactions
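Per the refund table, omitting the amount (or sending 0.00) refunds the entire settled amount. A hedged sketch, with a placeholder security key:

```python
from urllib.parse import urlencode

def build_refund(security_key, transactionid, amount=None, payment=None):
    """Build the urlencoded body for a 'refund' request.

    Omitting 'amount' (or sending 0.00) refunds the full settled amount;
    'payment' is only required for ACH ('check') refunds.
    """
    params = {
        "type": "refund",
        "security_key": security_key,
        "transactionid": transactionid,
    }
    if amount is not None:
        params["amount"] = f"{amount:.2f}"  # 0.00 => refund entire amount
    if payment is not None:
        params["payment"] = payment         # 'creditcard' or 'check'
    return urlencode(params)

# "MY_KEY" and the transaction id are hypothetical placeholder values
partial = build_refund("MY_KEY", "1234567890", 10.00)
full = build_refund("MY_KEY", "1234567890")
```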
Update
Variable Name    Description
type*    Type of transaction.
Values: 'update'
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
transactionid*    Original payment gateway transaction id
payment**    The type of payment.
Default: 'creditcard'
Values: 'creditcard' or 'check'
tracking_number    Shipping tracking number
shipping    Total shipping amount.
Format: x.xx
shipping_postal    Postal/ZIP code of the address where purchased goods will be delivered. This field can be identical to the 'ship_from_postal' if the customer is present and takes immediate possession of the goods.
ship_from_postal    Postal/ZIP code of the address from where purchased goods are being shipped, defaults to merchant profile postal code.
shipping_country    Shipping Country Code.
shipping_carrier    Shipping carrier.
Values: 'ups', 'fedex', 'dhl', or 'usps'
shipping_date    Shipping date.
Format: YYYYMMDD
order_description    Order Description.
Legacy variable includes: orderdescription
order_date    Order date.
Format: YYYYMMDD
customer_receipt    If set to true, when the customer is charged, they will be sent a transaction receipt.
Values: 'true' or 'false'
signature_image    Cardholder signature image.
Format: base64 encoded raw PNG image. (16kiB maximum)
ponumber    Cardholder's purchase order number.
summary_commodity_code    4 character international description code of the overall goods or services being supplied. The acquirer or processor will provide a list of current codes.
duty_amount    Amount included in the transaction amount associated with the import of purchased goods.
Format: x.xx
discount_amount    Amount included in the transaction amount of any discount applied to complete order by the merchant.
Format: x.xx
tax    The sales tax, included in the transaction amount, associated with the purchase. Setting tax equal to any negative value indicates an order that is exempt from sales tax.
Default: '0.00'
Format: x.xx
national_tax_amount    The national tax amount included in the transaction amount.
Format: x.xx
alternate_tax_amount    Second tax amount included in the transaction amount in countries where more than one type of tax can be applied to the purchases.
Format: x.xx
alternate_tax_id    Tax identification number of the merchant that reported the alternate tax amount.
vat_tax_amount    Contains the amount of any value added taxes which can be associated with the purchased item.
vat_tax_rate    Contains the tax rate used to calculate the sales tax amount. Can contain up to 2 decimal places, e.g. 1% = 1.00.
vat_invoice_reference_number    Invoice number that is associated with the VAT invoice.
customer_vat_registration    Value added tax registration number supplied by the cardholder.
merchant_vat_registration    Government assigned tax identification number of the merchant from whom the goods or services were purchased.
merchant_defined_field_#    Merchant Defined Fields.
Format: merchant_defined_field_1=Value
*    Always required
**    Required for ACH transactions
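Because the 'update' type accepts a long list of post-settlement fields, a generic pass-through helper fits it well. A minimal sketch; the field values shown are hypothetical examples:

```python
from urllib.parse import urlencode

def build_update(security_key, transactionid, **fields):
    """Build the urlencoded body for an 'update' request.

    Any of the post-settlement fields above (tracking_number, shipping,
    tax, order_description, merchant_defined_field_N, ...) can be
    passed as keyword arguments.
    """
    params = {
        "type": "update",
        "security_key": security_key,
        "transactionid": transactionid,
    }
    params.update(fields)
    return urlencode(params)

# All values below are hypothetical placeholders
body = build_update("MY_KEY", "1234567890",
                    tracking_number="1Z999AA10123456784",
                    shipping_carrier="ups",
                    shipping_date="20240115",
                    merchant_defined_field_1="invoice-42")
```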
ChatGPT
Thank you for providing detailed information on the transaction variables for the Payarc Payment API, as well as specific variables for various transaction types. This information is crucial for developers and merchants looking to integrate with the Payarc Payment Gateway.
    
Here's a summary of some key points:
    
Common Variables for Payment API:
    
type: Specifies the type of transaction (e.g., 'sale,' 'auth,' 'credit,' 'validate,' or 'offline').
security_key: API Security Key assigned to a merchant account.
payment_token: Tokenized version of the customer's card or check information.
transaction_session_id: A single-use session ID used by Kount for transaction and Data Collector linkage.
googlepay_payment_data: Encrypted token created when integrating with Google Pay.
ccnumber, ccexp, cvv: Credit card details (required for credit card transactions).
checkname, checkaba, checkaccount: ACH (Automated Clearing House) payment details (required for ACH transactions).
amount: Total amount to be charged.
surcharge, cash_discount, tip, currency: Additional payment details.
ipaddress: IP address of the cardholder.
tax, shipping, ponumber, order_description, orderid: Order-related information.
first_name, last_name, company, address1, address2, city, state, zip, country, phone, fax, email: Billing and shipping information.
merchant_defined_field_#: Custom fields for passing merchant-defined information.
customer_receipt: If set to true, sends a transaction receipt to the customer.
Transaction-Specific Variables:
    
Depending on the transaction type (e.g., 'capture,' 'void,' 'refund,' 'update'), specific variables and their requirements vary. For example:
transactionid: Original payment gateway transaction id (required for 'capture,' 'void,' 'refund,' 'update').
void_reason: Reason for voiding a transaction (required for 'void').
tracking_number, shipping_carrier, shipping_date: Shipping-related details (used in 'update').
payment_facilitator_id, submerchant_id, submerchant_name: Payment Facilitator-specific fields.
Recurring and Customer Vault Variables:
    
For recurring payments and Customer Vault integration, there are additional fields to manage subscriptions and customer information.
Level II and Level III Variables:
    
These fields are required for Level II and Level III processing, which includes additional order and line item detail fields.
3D Secure Variables:
    
Variables like cardholder_auth, cavv, xid, and three_ds_version are used for 3D Secure authentication.
Payment Facilitator-Specific Variables:
    
Payment Facilitator fields such as payment_facilitator_id, submerchant_id, and others are used for payment facilitator transactions.
It's important to note that the required and optional variables may vary based on the transaction type and specific use cases. Developers and merchants should refer to the Payarc documentation and guidelines for the most up-to-date information and integration instructions. Additionally, it's essential to handle sensitive payment data securely and comply with relevant data security standards, such as PCI DSS (Payment Card Industry Data Security Standard), when working with payment information.
    
    
    
    
User
Customer Vault Variables
Payment API
POST URL
POST URL:    https://payarc.transactiongateway.com/api/transact.php
Add/Update Customer Record
Variables    Description
customer_vault*    Add/Update a secure Customer Vault record.
Values: 'add_customer' or 'update_customer'
customer_vault_id    Specifies a Customer Vault id. If not set, the payment gateway will randomly generate a Customer Vault id.
billing_id    Billing id to be assigned or updated. If none is provided, one will be created or the billing id with priority '1' will be updated.
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
payment_token    The tokenized version of the customer's card or check information. This will be generated by Collect.js and is usable only once.
googlepay_payment_data    The encrypted token created when integrating directly with the Google Pay SDK.
ccnumber**    Credit card number.
ccexp**    Credit card expiration.
Format: MMYY
checkname***    The name on the customer's ACH account.
checkaba***    The customer's bank routing number.
checkaccount***    The customer's bank account number.
account_holder_type    The customer's ACH account entity.
Values: 'personal' or 'business'
account_type    The customer's ACH account type.
Values: 'checking' or 'savings'
sec_code    ACH standard entry class codes.
Values: 'PPD', 'WEB', 'TEL', or 'CCD'
currency    Set transaction currency.
payment    Set payment type to ACH or credit card.
Values: 'creditcard' or 'check'
orderid    Order id
order_description    Order Description
Legacy variable includes: orderdescription
merchant_defined_field_#    Can be set up in merchant control panel under 'Settings'->'Merchant Defined Fields'.
Format: merchant_defined_field_1=Value
first_name    Cardholder's first name.
Legacy variable includes: firstname
last_name    Cardholder's last name.
Legacy variable includes: lastname
address1    Card billing address.
city    Card billing city
state    Card billing state.
zip    Card billing postal code.
country    Card billing country code.
phone    Billing phone number.
email    Billing email address.
company    Cardholder's company.
address2    Card billing address, line 2.
fax    Billing fax number.
shipping_id    Shipping entry id. If none is provided, one will be created or the shipping id with priority '1' will be updated.
shipping_firstname    Shipping first name.
shipping_lastname    Shipping last name.
shipping_company    Shipping company.
shipping_address1    Shipping address.
shipping_address2    Shipping address, line 2.
shipping_city    Shipping city
shipping_state    Shipping state.
shipping_zip    Shipping postal code.
shipping_country    Shipping country code.
shipping_phone    Shipping phone number.
shipping_fax    Shipping fax number.
shipping_email    Shipping email address.
source_transaction_id    Specifies a payment gateway transaction id in order to associate payment information with a Customer Vault record.
acu_enabled    If set to true, credit card will be evaluated and sent based upon Automatic Card Updater settings. If set to false, credit card will not be submitted for updates when Automatic Card Updater runs.
Default: 'true'
Values: 'true' or 'false'
*    Always required
**    Required for credit card transactions
***    Required for ACH transactions
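The add-customer request above can be sketched the same way. Assumptions: the card number is a standard test number, the key is a placeholder, and omitting customer_vault_id lets the gateway generate one randomly, as the table states:

```python
from urllib.parse import urlencode

def build_add_customer(security_key, ccnumber, ccexp, **extra):
    """Build the body for customer_vault=add_customer with a credit card.

    ccexp must be MMYY. Omitting customer_vault_id in 'extra' lets the
    gateway generate one randomly.
    """
    params = {
        "customer_vault": "add_customer",
        "security_key": security_key,
        "ccnumber": ccnumber,
        "ccexp": ccexp,
    }
    params.update(extra)  # e.g. customer_vault_id, first_name, address1 ...
    return urlencode(params)

# "MY_KEY" and the cardholder details are hypothetical placeholders
body = build_add_customer("MY_KEY", "4111111111111111", "1025",
                          first_name="Jane", last_name="Doe")
```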
Customer Vault initiated Sale/Auth/Credit/Offline
Variable    Description
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
customer_vault_id*    Specifies a Customer Vault id.
amount    Total amount to be charged. For validate, the amount must be omitted or set to 0.00.
Format: x.xx
currency    The transaction currency. Format: ISO 4217
processor_id    If using Multiple MIDs, route to this processor (processor_id is obtained under Settings->Transaction Routing in the Control Panel).
descriptor    Set payment descriptor on supported processors.
descriptor_phone    Set payment descriptor phone on supported processors.
order_description    Order description.
Legacy variable includes: orderdescription
orderid    Order ID
Stored Credentials (CIT/MIT)
initiated_by    Who initiated the transaction.
Values: 'customer' or 'merchant'
initial_transaction_id    Original payment gateway transaction id.
stored_credential_indicator    The indicator of the stored credential.
Values: 'stored' or 'used'
Use 'stored' when processing the initial transaction in which you are storing a customer's payment details (customer credentials) in the Customer Vault or other third-party payment storage system.
Use 'used' when processing a subsequent or follow-up transaction using the customer payment details (customer credentials) you have already stored to the Customer Vault or third-party payment storage method.
*    Always required
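A vault-initiated charge can be sketched as below. One assumption to flag: the table above omits the transaction `type`, so this sketch assumes `type=sale` is taken from the common Payment API variables; confirm against the full docs. Key and ids are placeholders.

```python
from urllib.parse import urlencode

def build_vault_sale(security_key, customer_vault_id, amount,
                     initial_transaction_id=None):
    """Charge a stored Customer Vault record.

    When initial_transaction_id is given, the request is flagged as a
    merchant-initiated follow-up using the stored credential (per the
    Stored Credentials table above).
    """
    params = {
        "type": "sale",  # assumed from the common variables
        "security_key": security_key,
        "customer_vault_id": customer_vault_id,
        "amount": f"{amount:.2f}",
    }
    if initial_transaction_id:
        params.update(initiated_by="merchant",
                      stored_credential_indicator="used",
                      initial_transaction_id=initial_transaction_id)
    return urlencode(params)

# "MY_KEY", "CV123", and the transaction id are hypothetical placeholders
body = build_vault_sale("MY_KEY", "CV123", 25.00,
                        initial_transaction_id="1234567890")
```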
Delete Customer Record
Variable    Description
customer_vault*    Deletes a secure Customer Vault record.
Values: 'delete_customer'
customer_vault_id*    Specifies a Customer Vault id.
security_key*    API Security Key assigned to a merchant account.
New keys can be generated from the merchant control panel in Settings > Security Keys
*    Always required
Notes:
If you do not pass a customer_vault_id, our system will randomly generate one. If you include a customer_id and customer_vault_id, they must match.
You can only pass Credit Card or Electronic Check transaction variables.
ChatGPT
Thank you for providing detailed information on the Customer Vault variables for the Payarc Payment API. Customer Vault functionality is important for securely storing and managing customer payment information, providing a convenient and secure way for merchants to handle recurring payments and customer transactions.
    
Here's a summary of the key variables and actions related to the Customer Vault:
    
Add/Update Customer Record:
    
customer_vault: Specifies whether you want to add or update a Customer Vault record ('add_customer' or 'update_customer').
customer_vault_id: Specifies a Customer Vault ID. If not provided, the system will generate one.
billing_id: Billing ID to be assigned or updated. If not provided, the system will create one or update the billing ID with priority '1'.
security_key: API Security Key assigned to a merchant account.
Payment information variables like payment_token, googlepay_payment_data, ccnumber, ccexp, checkname, checkaba, checkaccount, etc., depending on the payment method (credit card or ACH).
Billing and shipping details like first_name, last_name, address1, city, state, zip, country, phone, email, company, etc.
source_transaction_id: Specifies a payment gateway transaction ID to associate payment information with a Customer Vault record.
acu_enabled: If set to true, credit card details will be evaluated and sent based on Automatic Card Updater settings.
Customer Vault initiated Sale/Auth/Credit/Offline:
    
security_key: API Security Key assigned to a merchant account.
customer_vault_id: Specifies a Customer Vault ID.
amount: Total amount to be charged. For validation, set it to 0.00.
currency: The transaction currency in ISO 4217 format.
processor_id: If using Multiple MIDs, route to this processor (obtained under Settings->Transaction Routing in the Control Panel).
Payment descriptor and order-related fields.
Stored Credentials (CIT/MIT):
    
initiated_by: Specifies who initiated the transaction ('customer' or 'merchant').
initial_transaction_id: Original payment gateway transaction ID.
stored_credential_indicator: Indicates the stored credential status ('stored' or 'used').
Delete Customer Record:
    
customer_vault: Specifies that you want to delete a Customer Vault record ('delete_customer').
customer_vault_id: Specifies the Customer Vault ID to be deleted.
security_key: API Security Key assigned to a merchant account.
It's important to note that you can only pass credit card or electronic check transaction variables, not both, in the same request. Additionally, the presence of certain variables may depend on the specific use case and payment method.
    
Developers and merchants should refer to the Payarc documentation and guidelines for the most up-to-date information and integration instructions when working with the Customer Vault functionality.
    
    
    
    
User
Transaction Response Variables
Payment API
Standard Response
Variable Name    Description
response    1 = Transaction Approved
2 = Transaction Declined
3 = Error in transaction data or system error
responsetext    Textual response
authcode    Transaction authorization code.
transactionid    Payment gateway transaction id.
avsresponse    AVS response code (See AVS Response Codes).
cvvresponse    CVV response code (See CVV Response Codes).
orderid    The original order id passed in the transaction request.
response_code    Numeric mapping of processor responses (See Result Code Table).
emv_auth_response_data    This will optionally come back when any chip card data is provided on the authorization. This data needs to be sent back to the SDK after an authorization.
    
    
Conditional Response
Variable Name    Description
customer_vault_id    The original customer_vault_id passed in the transaction request or the resulting customer_vault_id created on an approved transaction.
    
Note: Only returned when the "Customer Vault" service is active.
kount_score    The Kount "Omniscore" indicating the level of risk on a given transaction. The higher the score, the lower the risk.
    
Note: Only returned when the "Kount" service is active.
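The example responses elsewhere in this thread (e.g. `response=1&responsetext=Approved&...`) are query-string formatted, so the standard response can be parsed with stdlib tools. A minimal sketch; the sample response string is hypothetical:

```python
from urllib.parse import parse_qs

def parse_gateway_response(raw):
    """Parse the urlencoded gateway response body into a flat dict."""
    return {k: v[0] for k, v in parse_qs(raw).items()}

# Hypothetical sample response in the documented format
resp = parse_gateway_response(
    "response=1&responsetext=Approved&authcode=123456"
    "&transactionid=1234567890&avsresponse=Y&cvvresponse=M")

# response: 1 = approved, 2 = declined, 3 = error
approved = resp["response"] == "1"
```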
ChatGPT
The Transaction Response Variables for the Payarc Payment API provide information about the outcome and details of a transaction. Here are the key variables:
    
Standard Response Variables:
    
response: This variable indicates the result of the transaction.
    
1 = Transaction Approved
2 = Transaction Declined
3 = Error in transaction data or system error
responsetext: Textual response describing the transaction outcome.
    
authcode: Transaction authorization code.
    
transactionid: Payment gateway transaction ID.
    
avsresponse: AVS (Address Verification System) response code. AVS is used to verify the address provided by the cardholder during the transaction. Response codes indicate how well the address matches the cardholder's address on file.
    
See AVS Response Codes for specific code meanings.
cvvresponse: CVV (Card Verification Value) response code. CVV is used to verify the card's security code (usually located on the back of the card).
    
See CVV Response Codes for specific code meanings.
orderid: The original order ID passed in the transaction request.
    
response_code: Numeric mapping of processor responses. This code provides additional information about the transaction result.
    
See Result Code Table for specific code meanings.
emv_auth_response_data: This optional variable comes back when any chip card data is provided on the authorization. This data should be sent back to the SDK after an authorization.
    
Conditional Response Variables:
    
customer_vault_id: This variable is returned when the "Customer Vault" service is active. It either indicates the original customer_vault_id passed in the transaction request or the customer_vault_id created on an approved transaction.
    
kount_score: The Kount "Omniscore," which indicates the level of risk associated with a given transaction. A higher score typically indicates a lower level of risk. This variable is returned when the "Kount" service is active.
    
These response variables are crucial for understanding the outcome of a transaction, providing feedback to the user, and for record-keeping purposes. Merchants and developers can use this information to take appropriate actions based on the transaction's result. Additionally, the presence of some variables may depend on the specific services and features enabled for the payment gateway.
    
    
    
    
User
Credential on File Information
Payment API
Please note the below is meant to be a guide for how the platform supports CIT and MIT use cases. This is not meant to be an exhaustive list of items needed in order to be compliant. For more information on CIT/MIT compliance, please consult your processor.
Credential on File regulations apply any time data is stored to process future purchases for a cardholder.
    
    
Customer vs Merchant Initiated
When a customer is actively engaged in checkout - either physically present in a store, or checking out online in their browser, that is a Customer Initiated Transaction (CIT).
    
When the customer isn’t actively engaged, but has given permission for their card to be charged, that is a Merchant Initiated Transaction (MIT). In order for a merchant to submit a Merchant Initiated Transaction, a Customer Initiated transaction is required first.
    
Overview
A cardholder’s consent is required for the initial storage of credentials. When a card is stored, an initial transaction should be submitted (Validate, Sale, or Auth) with the correct credential-on-file type. The transaction must be approved (not declined, and without error). Then, store the transaction ID of the initial customer initiated transaction. That transaction ID must be submitted with any follow-up transactions (MIT or CIT).
    
Credential on File types include Recurring, Installment, and Unscheduled types.
    
For simplicity - we are using the Payment API variables. These match the names of the Batch Upload, Collect.js, Browser Redirect, or the Customer-Present Cloud APIs. The Three-Step API follows the same pattern, and the variables should be submitted on Step 1.
    
Request Details
Variable    Description
initiated_by    Who initiated the transaction.
Values: 'customer' or 'merchant'
initial_transaction_id    Original payment gateway transaction id.
stored_credential_indicator    The indicator of the stored credential.
Values: 'stored' or 'used'
Use 'stored' when processing the initial transaction in which you are storing a customer's payment details (customer credentials) in the Customer Vault or other third-party payment storage system.
Use 'used' when processing a subsequent or follow-up transaction using the customer payment details (customer credentials) you have already stored to the Customer Vault or third-party payment storage method.
Response Details
Variable    Description
cof_supported    Credential on File support indicator specific to the transaction.
Values: 'stored' or 'used'
Value will be 'stored' if CIT/MIT transaction was sent to a processor that supports the feature.
Value will be 'used' if CIT/MIT transaction was sent to a processor that does not support the feature or if a merchant-initiated transaction cannot occur due to Cross-Processor limitations.
Please Note: For Three-Step Redirect transactions, the request details must be sent in Step 1 and the 'cof_supported' element will be returned in the response of Step 3.
    
Referencing the Initial Transaction:
When doing a credential-on-file type transaction, we will reject any follow up transactions that pass in a card number that does not match the card brand used in the initial transaction. For example, using a Mastercard when the original transaction uses Visa will result in the transaction getting rejected. The card brands each have independent systems for tracking card-on-file transactions, so an initial transaction ID cannot be reused between them. We reject this type of incorrect reuse at the time of the request because it can result in settlement failures, downgrades, etc. later.
    
If a customer changes their card on file, a good practice is to first store it as a new initial transaction, and reference that initial transaction ID for future payments on the new card.
    
Recurring:
    
A transaction in a series of transactions that use a stored credential and are processed at fixed, regular intervals (not to exceed one year between transactions), representing cardholder agreement for the merchant to initiate future transactions for the purchase of goods or services provided at regular intervals.
If a customer is signing up for a recurring subscription, the merchant is expected to send "an initial recurring transaction" every time the customer signs up for a new recurring subscription.
    
For an initial transaction:
    
For a free trial, the initial transaction will be a validate transaction type (or auth if validate is not supported.)
If the customer is being charged immediately for a product, the initial transaction will be a sale or an authorization for the correct amount.
Either transaction MUST INCLUDE three items:
    
billing_method=recurring
initiated_by=customer
stored_credential_indicator=stored
Examples
Example 1: In this request, an initial recurring sale is sent and an approved transaction is returned in the response. Store this transaction for the follow up request.
    
Request    ...type=sale&billing_method=recurring&initiated_by=customer&stored_credential_indicator=stored...
Response    ...response=1&responsetext=Approved&transactionid=1234567890...
The transaction ID would be stored and submitted on follow up transactions. The follow up transaction(s) would include:
    
billing_method=recurring
initiated_by=merchant
stored_credential_indicator=used
initial_transaction_id=XXXXXXXXXX
Example 2: In this request, the subsequent merchant initiated sale is processed using the stored transaction from Example 1.
    
Request    ...type=sale&billing_method=recurring&initiated_by=merchant&stored_credential_indicator=used&initial_transaction_id=1234567890...
Response    ...response=1&responsetext=Approved&transactionid=1234567891...
Please Note: This transaction ID cannot be used for "unscheduled" or "installment" credential-on-file transactions.
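Examples 1 and 2 above can be sketched as a pair of request builders: the customer-initiated transaction that stores the credential, and the merchant-initiated follow-up that references it. Keys, card details, and ids are hypothetical placeholders:

```python
from urllib.parse import urlencode

def initial_recurring_sale(security_key, ccnumber, ccexp, amount):
    """Customer-initiated sale that stores the credential (Example 1)."""
    return urlencode({"type": "sale", "security_key": security_key,
                      "ccnumber": ccnumber, "ccexp": ccexp,
                      "amount": f"{amount:.2f}",
                      "billing_method": "recurring",
                      "initiated_by": "customer",
                      "stored_credential_indicator": "stored"})

def follow_up_recurring_sale(security_key, initial_transaction_id, amount):
    """Merchant-initiated follow-up referencing the stored id (Example 2)."""
    return urlencode({"type": "sale", "security_key": security_key,
                      "amount": f"{amount:.2f}",
                      "billing_method": "recurring",
                      "initiated_by": "merchant",
                      "stored_credential_indicator": "used",
                      "initial_transaction_id": initial_transaction_id})

first = initial_recurring_sale("MY_KEY", "4111111111111111", "1025", 9.99)
# transactionid from the approved first response would be stored, then:
later = follow_up_recurring_sale("MY_KEY", "1234567890", 9.99)
```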
    
Installment:
    
An “installment” transaction is one of a series of transactions that use a stored credential and that represent cardholder agreement with the merchant to initiate one or more future transactions over a period of time for a single purchase of goods or services.
Installment transactions work just like Recurring in that you need a customer initiated transaction for a subsequent installment transaction. The difference is the billing_method will be “installment”.
    
The customer initiated transaction MUST INCLUDE at least three items (* recommended to send, if available):
    
billing_method=installment
initiated_by=customer
stored_credential_indicator=stored
* billing_total
* billing_number (Values: 0-99)
Examples
Example 3: In this request, an initial installment sale is sent and an approved transaction is returned in the response. Store this transaction for the follow up request.
    
Request    ...type=sale&billing_method=installment&initiated_by=customer&stored_credential_indicator=stored&billing_total=100.00&billing_number=1&amount=25.00...
Response    ...response=1&responsetext=Approved&transactionid=1234567890...
The transaction ID would be stored and submitted on follow up transactions. The follow up transaction(s) would include (* recommended to send, if available):
    
billing_method=installment
initiated_by=merchant
stored_credential_indicator=used
initial_transaction_id=XXXXXXXXXX
* billing_total
* billing_number
Example 4: In this request, the subsequent merchant initiated sale is processed using the stored transaction from Example 3.
    
Request    ...type=sale&billing_method=installment&initiated_by=merchant&stored_credential_indicator=used&initial_transaction_id=1234567890&billing_total=100.00&billing_number=1&amount=25.00...
Response    ...response=1&responsetext=Approved&transactionid=1234567891...
    
Please Note: This transaction ID cannot be used for "unscheduled" or "recurring" card on file transactions.
    
Unscheduled Credential On File:
    
For payments that aren’t recurring or installment - there are unscheduled options as well.
The first customer initiated transaction will include these two items (no billing method):
    
initiated_by=customer
stored_credential_indicator=stored
Examples
Example 5: In this request, an initial unscheduled sale is sent and an approved transaction is returned in the response. Store this transaction for the follow up request.
    
Request    ...type=sale&initiated_by=customer&stored_credential_indicator=stored...
Response    ...response=1&responsetext=Approved&transactionid=1234567890...
The transaction ID can be used, without a billing method, for a customer initiated or merchant initiated transaction.
    
Please Note: The transaction ID cannot be used for a “recurring” or “installment” transaction.
    
Unscheduled, Customer Initiated: A card-absent transaction initiated by the cardholder where the cardholder does not need to enter their card details as the merchant uses the payment credential previously stored by the cardholder to perform the transaction. Examples include a transaction using customer’s merchant profile or digital wallet.
    
This is your typical shopping cart scenario where the customer checks out without having to re-enter their card details.
    
The follow up transaction(s) would include:
    
initiated_by=customer
stored_credential_indicator=used
Example 6: In this request, a subsequent unscheduled sale is sent and an approved transaction is returned in the response.
    
Request    ...type=sale&initiated_by=customer&stored_credential_indicator=used...
Response    ...response=1&responsetext=Approved&transactionid=1234567891...
Unscheduled, Merchant Initiated: A transaction using a stored credential for a fixed or variable amount that does not occur on a scheduled or regularly occurring transaction date, where the cardholder has provided consent for the merchant to initiate one or more future transactions. An example of this transaction is an account auto-top up transaction.
    
An example of an account auto-top up would be a customer with an account with a balance. When that balance gets low, the customer's card is charged automatically, without the customer's involvement.
    
The follow up transaction(s) would include:
    
initiated_by=merchant
stored_credential_indicator=used
initial_transaction_id=XXXXXXXXXX
Example 7: In this request, a subsequent unscheduled sale is sent and an approved transaction is returned in the response.
    
Request    ...type=sale&initiated_by=merchant&stored_credential_indicator=used&initial_transaction_id=1234567890...
Response    ...response=1&responsetext=Approved&transactionid=1234567892...
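Examples 6 and 7 differ only in who initiates the follow-up and whether the initial transaction id must be referenced; note that no billing_method is sent for unscheduled transactions. A minimal sketch with placeholder values:

```python
from urllib.parse import urlencode

def unscheduled_follow_up(security_key, amount, initiated_by,
                          initial_transaction_id=None):
    """Follow-up unscheduled sale: no billing_method is sent.

    initiated_by='customer' covers the stored-card checkout case
    (Example 6); initiated_by='merchant' (e.g. auto-top-up, Example 7)
    must also reference the initial transaction id.
    """
    params = {"type": "sale", "security_key": security_key,
              "amount": f"{amount:.2f}",
              "initiated_by": initiated_by,
              "stored_credential_indicator": "used"}
    if initiated_by == "merchant":
        params["initial_transaction_id"] = initial_transaction_id
    return urlencode(params)

# "MY_KEY" and the transaction id are hypothetical placeholders
cit = unscheduled_follow_up("MY_KEY", 5.00, "customer")
mit = unscheduled_follow_up("MY_KEY", 20.00, "merchant", "1234567890")
```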
    
Appendix 1: Recommended Further Reading:
If there is any question about where a transaction type falls, we recommend reviewing the official card brand documentation. Visa’s guidelines are the most stringent, and generally if you follow those guidelines, you’ll also be compliant for MasterCard, American Express and Discover.
    
Visa:
https://usa.visa.com/dam/VCOM/global/support-legal/documents/stored-credential-transaction-framework-vbs-10-may-17.pdf
    
MasterCard:
https://www.mastercard.us/content/dam/public/mastercardcom/na/us/en/banks-and-credit-unions/other/credential-on-file-the-digital-commerce-growth-engine.pdf
ChatGPT
The provided information outlines guidelines and requirements for handling Credential on File (CIT) and Merchant Initiated Transaction (MIT) scenarios when using the Payment API. These guidelines are important for compliance with regulations related to storing cardholder data for future transactions. Below is a summary of the key points for each scenario:
    
Customer vs. Merchant Initiated:
    
Customer Initiated Transaction (CIT): This occurs when a customer is actively engaged in the checkout process, either in a physical store or online in their browser.
Merchant Initiated Transaction (MIT): This occurs when the customer has given permission for their card to be charged but is not actively engaged in the checkout process. MIT requires a prior CIT transaction.
Overview:
    
Cardholder's consent is required for initial storage of credentials.
An initial transaction (Validate, Sale, or Auth) with the correct credential-on-file type must be submitted and approved for storing cardholder data.
The transaction ID of the initial CIT transaction must be stored and used in follow-up transactions (MIT or CIT).
Credential on File Types:
    
Recurring:
    
Represents a series of transactions processed at fixed, regular intervals (not exceeding one year).
Initial transaction for recurring:
For free trials: Use a validate transaction (or auth if validate is not supported).
For immediate charges: Use a sale or authorization.
Include billing_method=recurring, initiated_by=customer, and stored_credential_indicator=stored.
Installment:
    
Represents a series of transactions over time for a single purchase.
Initial transaction for installment:
Include billing_method=installment, initiated_by=customer, and stored_credential_indicator=stored.
Recommended to include billing_total and billing_number (values: 0-99).
Unscheduled Credential On File:
    
Used for payments that aren't recurring or installment.
The first customer-initiated transaction includes initiated_by=customer and stored_credential_indicator=stored.
Follow-up Transactions for Recurring and Installment:
    
Follow-up transactions must include billing_method=recurring or billing_method=installment, initiated_by=merchant, stored_credential_indicator=used, and initial_transaction_id (the ID of the initial customer-initiated transaction).
Include billing_total and billing_number for installment transactions.
Unscheduled, Customer Initiated:
    
Used for card-absent transactions initiated by the cardholder, where they don't need to enter card details again.
Follow-up transactions include initiated_by=customer and stored_credential_indicator=used.
Unscheduled, Merchant Initiated:
    
Used for transactions using stored credentials for a fixed or variable amount.
Follow-up transactions include initiated_by=merchant, stored_credential_indicator=used, and initial_transaction_id (the ID of the initial transaction).
It's crucial to follow these guidelines and use the correct parameters to ensure compliance with card brand regulations and to prevent issues with transaction processing. Additionally, refer to the official card brand documentation for Visa, MasterCard, American Express, and Discover for more detailed guidelines on stored credential transactions.
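To make the recurring case concrete, here is a minimal sketch of the initial CIT and follow-up MIT payloads, using only the parameter names listed in the summary above. The gateway URL and helper function names are illustrative placeholders, not verified against the gateway's official documentation:

```python
# Sketch of the recurring credential-on-file payloads summarized above.
# Parameter names come from the summary; the URL and helpers are assumptions.

GATEWAY_URL = "https://payarc.transactiongateway.com/api/transact.php"

def initial_recurring_sale(security_key, ccnumber, ccexp, amount):
    """Initial customer-initiated (CIT) sale that stores the credential."""
    return {
        "security_key": security_key,
        "type": "sale",
        "ccnumber": ccnumber,
        "ccexp": ccexp,
        "amount": "{0:.2f}".format(float(amount)),
        # credential-on-file flags for the first transaction in the series
        "billing_method": "recurring",
        "initiated_by": "customer",
        "stored_credential_indicator": "stored",
    }

def followup_recurring_sale(security_key, amount, initial_transaction_id):
    """Merchant-initiated (MIT) follow-up that reuses the stored credential."""
    return {
        "security_key": security_key,
        "type": "sale",
        "amount": "{0:.2f}".format(float(amount)),
        "billing_method": "recurring",
        "initiated_by": "merchant",
        "stored_credential_indicator": "used",
        # transaction ID returned for the approved initial CIT transaction
        "initial_transaction_id": initial_transaction_id,
    }

# Either payload would then be POSTed as form data, e.g.:
#   requests.post(GATEWAY_URL, data=initial_recurring_sale(...), timeout=30)
```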
    
    
    
    
User
When a transaction is processed, will a customer vault entry be created automatically?
ChatGPT
The creation of a Customer Vault entry is typically not automatic when a transaction is processed. In most payment gateway systems, you need to explicitly specify whether you want to create or update a Customer Vault entry during the transaction request by setting the appropriate variables or parameters.
    
Here's how it generally works:
    
Customer Vault Entry Creation: If you want to create a new Customer Vault entry for a customer during a transaction, you include the necessary information in your API request. This typically involves setting the "customer_vault" variable to "add_customer" and providing customer details, such as the customer's name, address, payment information (credit card or bank account details), and other relevant data. The payment gateway will then create a new Customer Vault entry for that customer and associate it with the transaction.
    
Customer Vault Entry Update: If you already have an existing Customer Vault entry for a customer, you can update it with new information by setting the "customer_vault" variable to "update_customer" in your transaction request. You would also provide the "customer_vault_id" to specify which Customer Vault entry should be updated. This is useful for scenarios where a customer's payment information changes, and you want to update it in the vault without creating a new entry.
    
No Customer Vault Entry: If you do not include any customer-related variables or parameters in your transaction request, the payment gateway will not create or update a Customer Vault entry automatically.
    
It's essential to refer to the specific API documentation provided by your payment gateway provider (in this case, PayArc) to understand the exact parameters and methods for creating or updating Customer Vault entries within their system. The process may vary slightly between different payment gateway providers and their API implementations.
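As an illustrative sketch of the first case, a sale request that also opts in to vault creation might look like this. The parameter names follow the description above, but the field set is an assumption and should be checked against the gateway's own documentation:

```python
# Hypothetical sale payload that asks the gateway to create a vault entry.
# Field names follow the "add_customer" description above; not verified
# against PayArc's official API reference.

def sale_with_vault_add(security_key, ccnumber, ccexp, amount,
                        firstname, lastname):
    """Sale that also requests creation of a Customer Vault entry."""
    return {
        "security_key": security_key,
        "type": "sale",
        "ccnumber": ccnumber,
        "ccexp": ccexp,
        "amount": "{0:.2f}".format(float(amount)),
        "firstname": firstname,
        "lastname": lastname,
        # explicit opt-in: without this, no vault entry is created
        "customer_vault": "add_customer",
    }
```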
    
    
    
    
User
So, when I first perform a transaction, I can create a customer vault entry related to the payer. How do I retrieve customer vault information for that same payer during future transactions?
ChatGPT
To retrieve customer vault information for the same payer during future transactions, you typically need to reference the Customer Vault ID associated with that payer. Here's the general process:
    
Create or Update Customer Vault Entry: During the initial transaction, you create or update a Customer Vault entry for the payer, as explained in your previous question. Make sure to store the Customer Vault ID generated by the payment gateway for that entry.
    
Store Customer Vault ID: After the initial transaction, you should store the Customer Vault ID in your own database or system associated with that payer's profile. This allows you to identify the payer in future transactions.
    
Retrieve Customer Vault Information: When you want to perform a future transaction for the same payer, you include the Customer Vault ID in your transaction request to indicate which customer's payment information to use. Here's how you typically do it:
    
Set the "customer_vault" variable to "add_customer" or "update_customer," depending on whether it's a new transaction or an update to an existing entry.
    
Include the "customer_vault_id" variable in your request, and set it to the Customer Vault ID you stored during the initial transaction.
    
Provide other transaction-specific details like the payment amount, payment type, and any additional information required for the specific transaction.
    
By including the Customer Vault ID in your transaction request, you instruct the payment gateway to use the payment information associated with that customer's vault entry for processing the transaction.
    
It's important to note that the exact variable names and API parameters may vary depending on the payment gateway provider you are using (in this case, PayArc). Therefore, it's essential to refer to the specific API documentation provided by PayArc to understand the exact parameters and methods for retrieving customer vault information during future transactions within their system.
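Putting those steps together, a hypothetical follow-up charge against a previously vaulted customer reduces to passing the stored ID instead of card details. Again, the exact field names are assumptions to verify against PayArc's documentation:

```python
# Hypothetical follow-up sale billed against a stored vault entry.
# No card number is sent; the vault ID identifies the stored credentials.

def sale_from_vault(security_key, customer_vault_id, amount):
    """Charge a previously vaulted customer by their vault ID."""
    return {
        "security_key": security_key,
        "type": "sale",
        "amount": "{0:.2f}".format(float(amount)),
        # tells the gateway which stored payment record to bill
        "customer_vault_id": customer_vault_id,
    }
```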
    
    
    
    
    
User
Can you please convert this payarc python api code example to use the python requests library:
    
import pycurl
import urllib
import urlparse
import StringIO
    
    
class gwapi():
    
     def __init__(self):
         self.login= dict()
         self.order = dict()
         self.billing = dict()
         self.shipping = dict()
         self.responses = dict()
    
     def setLogin(self,security_key):
         self.login['security_key'] = security_key
    
     def setOrder(self, orderid, orderdescription, tax, shipping, ponumber,ipadress):
         self.order['orderid'] = orderid;
         self.order['orderdescription'] = orderdescription
         self.order['shipping'] = '{0:.2f}'.format(float(shipping))
         self.order['ipaddress'] = ipadress
         self.order['tax'] = '{0:.2f}'.format(float(tax))
         self.order['ponumber'] = ponumber
    
    
     def setBilling(self,
             firstname,
             lastname,
             company,
             address1,
             address2,
             city,
             state,
             zip,
             country,
             phone,
             fax,
             email,
             website):
         self.billing['firstname'] = firstname
         self.billing['lastname'] = lastname
         self.billing['company'] = company
         self.billing['address1'] = address1
         self.billing['address2'] = address2
         self.billing['city']     = city
         self.billing['state']     = state
         self.billing['zip']     = zip
         self.billing['country'] = country
         self.billing['phone']     = phone
         self.billing['fax']     = fax
         self.billing['email']     = email
         self.billing['website'] = website
    
     def setShipping(self,firstname,
             lastname,
             company,
             address1,
             address2,
             city,
             state,
             zipcode,
             country,
             email):
         self.shipping['firstname'] = firstname
         self.shipping['lastname'] = lastname
         self.shipping['company'] = company
         self.shipping['address1'] = address1
         self.shipping['address2'] = address2
         self.shipping['city']     = city
         self.shipping['state']     = state
         self.shipping['zip']     = zipcode
         self.shipping['country'] = country
         self.shipping['email']     = email
    
    
     def doSale(self,amount, ccnumber, ccexp, cvv=''):
    
         query = ""
         # Login Information
    
         query = query + "security_key=" + urllib.quote(self.login['security_key']) + "&"
         # Sales Information
         query += "ccnumber=" + urllib.quote(ccnumber) + "&"
         query += "ccexp=" + urllib.quote(ccexp) + "&"
         query += "amount=" + urllib.quote('{0:.2f}'.format(float(amount))) + "&"
         if (cvv!=''):
             query += "cvv=" + urllib.quote(cvv) + "&"
         # Order Information
         for key,value in self.order.iteritems():
             query += key +"=" + urllib.quote(str(value)) + "&"
    
         # Billing Information
         for key,value in self.billing.iteritems():
             query += key +"=" + urllib.quote(str(value)) + "&"
    
         # Shipping Information
         for key,value in self.shipping.iteritems():
             query += key +"=" + urllib.quote(str(value)) + "&"
    
         query += "type=sale"
         return self.doPost(query)
    
    
    
     def doPost(self,query):
         responseIO = StringIO.StringIO()
         curlObj = pycurl.Curl()
         curlObj.setopt(pycurl.POST,1)
         curlObj.setopt(pycurl.CONNECTTIMEOUT,30)
         curlObj.setopt(pycurl.TIMEOUT,30)
         curlObj.setopt(pycurl.HEADER,0)
         curlObj.setopt(pycurl.SSL_VERIFYPEER,0)
         curlObj.setopt(pycurl.WRITEFUNCTION,responseIO.write);
    
         curlObj.setopt(pycurl.URL,"https://payarc.transactiongateway.com/api/transact.php")
    
         curlObj.setopt(pycurl.POSTFIELDS,query)
    
         curlObj.perform()
    
         data = responseIO.getvalue()
         temp = urlparse.parse_qs(data)
         for key,value in temp.iteritems():
             self.responses[key] = value[0]
         return self.responses['response']
    
# NOTE: your security_key should replace the one below
gw = gwapi()
gw.setLogin("6457Thfj624V5r7WUwc5v6a68Zsd6YEm");
    
gw.setBilling("John","Smith","Acme, Inc.","123 Main St","Suite 200", "Beverly Hills",
         "CA","90210","US","555-555-5555","555-555-5556","support@example.com",
         "www.example.com")
gw.setShipping("Mary","Smith","na","124 Shipping Main St","Suite Ship", "Beverly Hills",
         "CA","90210","US","support@example.com")
gw.setOrder("1234","Big Order",1, 2, "PO1234","65.192.14.10")
    
r = gw.doSale("5.00","4111111111111111","1212",'999')
print gw.responses['response']
    
if (int(gw.responses['response']) == 1) :
     print "Approved"
elif (int(gw.responses['response']) == 2) :
     print "Declined"
elif (int(gw.responses['response']) == 3) :
     print "Error"
ChatGPT
Sure, here's the equivalent Python code using the requests library to perform the same action as the provided pycurl code:
    
import requests
    
class Gwapi:
    
     def __init__(self):
         self.login = dict()
         self.order = dict()
         self.billing = dict()
         self.shipping = dict()
         self.responses = dict()
    
     def setLogin(self, security_key):
         self.login['security_key'] = security_key
    
     def setOrder(self, orderid, orderdescription, tax, shipping, ponumber, ipadress):
         self.order['orderid'] = orderid
         self.order['orderdescription'] = orderdescription
         self.order['shipping'] = '{0:.2f}'.format(float(shipping))
         self.order['ipaddress'] = ipadress
         self.order['tax'] = '{0:.2f}'.format(float(tax))
         self.order['ponumber'] = ponumber
    
     def setBilling(self, firstname, lastname, company, address1, address2, city, state, zip, country, phone, fax, email, website):
         self.billing['firstname'] = firstname
         self.billing['lastname'] = lastname
         self.billing['company'] = company
         self.billing['address1'] = address1
         self.billing['address2'] = address2
         self.billing['city'] = city
         self.billing['state'] = state
         self.billing['zip'] = zip
         self.billing['country'] = country
         self.billing['phone'] = phone
         self.billing['fax'] = fax
         self.billing['email'] = email
         self.billing['website'] = website
    
     def setShipping(self, firstname, lastname, company, address1, address2, city, state, zipcode, country, email):
         self.shipping['firstname'] = firstname
         self.shipping['lastname'] = lastname
         self.shipping['company'] = company
         self.shipping['address1'] = address1
         self.shipping['address2'] = address2
         self.shipping['city'] = city
         self.shipping['state'] = state
         self.shipping['zip'] = zipcode
         self.shipping['country'] = country
         self.shipping['email'] = email
    
     def doSale(self, amount, ccnumber, ccexp, cvv=''):
         # Login Information
         data = {
             'security_key': self.login['security_key'],
             'ccnumber': ccnumber,
             'ccexp': ccexp,
             'amount': '{0:.2f}'.format(float(amount)),
         }
    
         if cvv:
             data['cvv'] = cvv
    
         # Order Information
         data.update(self.order)
    
         # Billing Information
         data.update(self.billing)
    
         # Shipping Information
         data.update(self.shipping)
    
         data['type'] = 'sale'
    
         return self.doPost(data)
    
     def doPost(self, data):
         url = 'https://payarc.transactiongateway.com/api/transact.php'
         response = requests.post(url, data=data)
    
         # Parse the response
         temp = response.text.split('&')
         for item in temp:
             key, value = item.split('=')
             self.responses[key] = value
    
         return self.responses['response']
    
# NOTE: your security_key should replace the one below
gw = Gwapi()
gw.setLogin("6457Thfj624V5r7WUwc5v6a68Zsd6YEm")
    
gw.setBilling("John", "Smith", "Acme, Inc.", "123 Main St", "Suite 200", "Beverly Hills",
                "CA", "90210", "US", "555-555-5555", "555-555-5556", "support@example.com",
                "www.example.com")
gw.setShipping("Mary", "Smith", "na", "124 Shipping Main St", "Suite Ship", "Beverly Hills",
                 "CA", "90210", "US", "support@example.com")
gw.setOrder("1234", "Big Order", 1, 2, "PO1234", "65.192.14.10")
    
r = gw.doSale("5.00", "4111111111111111", "1212", '999')
print(gw.responses['response'])
    
if int(gw.responses['response']) == 1:
     print("Approved")
elif int(gw.responses['response']) == 2:
     print("Declined")
elif int(gw.responses['response']) == 3:
     print("Error")
This code uses the requests library to make the API request and parse the response, providing the same functionality as the pycurl code you provided. Make sure to replace the security_key with your actual security key.
    
    
    
This took a minute, and from there I was able to ask any questions about the functionality I needed from the API, and generate the exact request strings and Python requests library code needed to make every call. What would have taken lots of reading and experimentation with a test account was completed by a few minutes of interaction with GPT. In my experience, it's *always been much faster and simpler, by several orders of magnitude, to complete these sorts of integration tasks using GPT as a guide - asking questions about how required logic maps to function call signatures, API request parameters, string concatenations, etc., and then rigorously testing any generated code examples - than it is to do so much one-off assimilation manually. That sort of work is almost always wasted time and brain power, and it's almost never fun. That's why I'm sharing my experiences with the current crop of LLMs - used in these ways, they've increased my productivity and joy, and they leave my brain power, energy, and attention available for more important and challenging innovative work.
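As one small example of that testing step: the generated doPost parses the response by splitting on "&" and "=" by hand, which mishandles blank or URL-encoded values. A safer variant (my tweak, using only the standard library, not part of GPT's output) is:

```python
from urllib.parse import parse_qs

def parse_gateway_response(body):
    """Parse a form-encoded gateway response into a flat dict.

    keep_blank_values=True preserves empty fields like 'avsresponse='
    that a hand-rolled split-based parser can drop or crash on.
    """
    return {key: values[0]
            for key, values in parse_qs(body, keep_blank_values=True).items()}

resp = parse_gateway_response("response=1&responsetext=SUCCESS&avsresponse=")
# resp["response"] is "1"; resp["avsresponse"] survives as ""
```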

posted by:   Nick     15-Oct-2023/11:09:55-7:00



That Meta documentation looks very promising ;-)
It has been written for real people to use, and while providing a lot of answers to questions that are likely to come up, it helps them to program in Meta - hopefully they will also ask questions on the forum.
    
It also shows that the AI is not creative but responsive to the input. But I agree that ChatGPT can be very helpful in getting the right information from sources where I would otherwise suffer a complete overload of (irrelevant) information. But you do need to figure out what questions to ask...

posted by:   Arnold     15-Oct-2023/11:39:11-7:00



I mentioned above: 'It does take some time to learn how to interact with it effectively, and clearly it's much more capable in some problem domains, or when applying its capabilities to some tools over others, etc., but for the time I've spent using it this year, the investment of effort has paid off well.'
    
For the sorts of work GPT excels at, that learning curve often results in 10x-100x productivity increase, not to mention improvement in my bad-mood factor :)
    
I'm surprised at just how dramatically insightful and creative GPT can be - you can get a glimmer of that in how it dealt with the OCR table example - its creative process included choosing different libraries when a previously chosen library hit limitations, evaluating output to recognize when input data formats were inconsistent, handling edge cases in value combination, etc. Not only that, I've been impressed by how the scope of its attention has been able to keep quite a few simultaneous pieces of a project related, and how it relates current questions to previous questions asked in a single thread. It's very natural to work with, once some basic patterns are understood.
    
I'm going to put together a tutorial about how to integrate GPT into common development workflows, to solve the kinds of tasks that come up repeatedly in all sorts of typical projects. No one should have to do so much of the old school work that we had to do last year :P)

posted by:   Nick     15-Oct-2023/13:13:51-7:00



I can go on and on about how I've used GPT this year. I already mentioned this article in a previous topic, but I think it's appropriate here:
    
https://dev.to/wesen/llms-will-fundamentally-change-software-engineering-3oj8
    
Everything from finding a stupid syntax error in hundreds of lines of code after you're fatigued from 14 straight hours of coding, to pinpointing where changes need to be made based upon error codes returned by Apache and other common server software, to generating complete detailed solutions consisting of hundreds of lines of code that involve integrating multiple examples from different sources... I had GPT create multiple working revisions of different levels of an authentication process that I could use with my little Bottle+PyDAL framework, since Bottle didn't have any authentication modules/libraries which did what I needed. I walked it through creating an entire process to sign up new users, with verification links sent to new users' emails, which then ran routes/functions which GPT generated in the server code, to save the newly verified credentials in encrypted format, in database tables which it generated using PyDAL database code (which could be run in any mainstream database system), etc., all without writing a single line of code manually. I just talked GPT through the requirements. GPT saved me many, many hours of the write/run/debug/edit process which would have been required with libraries I wasn't familiar with. And along the way, I was able to ask questions about the generated code - how it worked, and how it could be altered in required ways - which is a far more productive and context-appropriate way to learn exactly how to perform the operation you need than reading multiple tutorials and pages of API documentation, searching Google, StackOverflow, etc., and assimilating all the necessary understanding. You can use GPT to focus directly on achieving the required goals, and on understanding how to use the tools you want to use to achieve those goals, immediately.
And you can use it to compare solutions generated using multiple tool choices along the way, because it knows more than any single person ever could about all the tools available to complete a given task. And you can ask questions about the benefits and drawbacks of choosing any given tool, library, etc., just as you would speak with a human.
    
The article above has more information about how that author has used Copilot and other LLM tools to dramatically improve his workflow. There are thousands of other articles and videos online already which demonstrate other techniques for improved productivity using the current crop of AI tools. I've fully embraced using these tools, and my quality of life (not to mention the scope of projects I can take on more easily, and the improved income potential), the lives of my clients, and the lives of everyone else in my sphere of influence have been greatly improved as a result :)
    
I think in the end, these sorts of tools enable people to step back from decisions about chosen paths of work, and focus more on quality of life. That's always been my main focus throughout life, and I'm grateful every time I find new tools which make it possible for me to have more time and energy to spend making life worthwhile and meaningful overall.

posted by:   Nick     15-Oct-2023/14:10:01-7:00



Please keep in mind, I've always done software development while running at least 1 or 2 other busy businesses simultaneously - and enjoying lots of social life and other hobbies at the same time. I prefer to have lots of time and energy to fully involve myself in all the meaningful things in life, to spend time with loved ones, friends, family, pets, etc., to spend time outdoors, to spend time working out and staying healthy, to spend time involving myself in other creative endeavors which make life meaningful and satisfying. I feel very grateful for any tools which help to reduce necessary work, and allow me to dedicate myself to those other meaningful goals.

posted by:   Nick     15-Oct-2023/14:28:40-7:00



"Kaj, in all our discussions here, you've disparaged some fantastically talented people. Why such strong vitriol?"
    
"Do you really think I could possibly not understand that AI systems are software technologies written using programming languages? That response does come across as a bit unnecessarily condescending."
    
Nick, with all due respect, I think you are being too sensitive. That I am criticising your statements as being too sweeping, and try to formulate that in a clear way, so that others on this public forum can also follow along, does not mean I am condescending to you, disparaging other authors in all our discussions, or spitting vitriol.
    
I have always admired your work, and I admire many others, but that does not mean I agree with everything they say, as I have also worked long and hard to bring some insights of my own to the table.

posted by:   Kaj     15-Oct-2023/16:09:38-7:00



Nice GPT sessions on Meta. Now that is a start! I didn't see it make any mistakes. It's also the first time I see ChatGPT say anything about Meta that isn't utter nonsense.
    
It confirms that Meta now has enough documentation for AI to get a grip on it. There is also a vague indication that it knows Meta is a REBOL language and that it may be using its prior knowledge about REBOL to know how to handle Meta. This is something I was hoping for, and aiming the documentation at. Further, it is able to use its knowledge about CGI in the context of Meta.
    
As you see, I assume all you say is true, which should then mean that the way to work with Meta and AI is already here. And for tasks that Meta is currently capable of, you get clearer and much more performant code than if you let the AI generate, say, Python code, or much more portable code than JavaScript.
    
Seeing that ChatGPT hasn't learned about the past two years yet, and thus hardly about Meta, you should feed it the following things:
- The Meta website,
- Arnold's site,
- The Meta chat forum,
- Perhaps the threads here and on the AtariAge forum.
    
Does ChatGPT not read websites and follow the links, so you could feed it only the Meta site?

posted by:   Kaj     15-Oct-2023/17:43:21-7:00



GPT will not read websites or follow links without using a plugin, and in-place learning is only good for the session, so other users aren't able to take advantage of what's been learned by feeding it documentation in your own sessions. So, GPT-5 should be ready to learn all about Meta - have lots of docs and examples online before that training cutoff, if you want future GPT users to be able to work with Meta without having to do any in-place training.
    
GPT made a connection between Rebol and Meta because I made that association during the in-place learning in the session. It does know about CGI, so it's able to apply what it knows about that topic to what it learns about CGI in Meta.
    
Keep in mind GPT isn't the only LLM. Others are becoming much more capable - some recent models trained specifically to generate code surpass the capabilities of GPT-3.5 (which is very good). You can fine-tune the training of those others, or even set up others to maintain in-place learning about Meta, and then use toolchains, including those organized around LangChain, to set up your own service to answer Meta questions and generate Meta code. I set up such a system, for example, to answer questions about all the docs I've written about paramotor instruction. That system was able to answer all the detailed technical questions I used to spend hours talking about with prospective paramotor students.

posted by:   Nick     15-Oct-2023/19:38:52-7:00



I hope it's clear, what I was demonstrating was GPT's ability to perform in-place learning - to learn from docs presented in a conversation, which requires reasoning about information it hasn't been trained or tuned on (which is how I've been using it to save time dealing with obscure 1-off APIs). The point was that it can produce effective output even using in-place learning.
    
If you want any LLM to really be able to work with Meta, it's best to ensure they're trained or fine-tuned on many Meta docs and examples. You can fine-tune the open source models yourself, or set up systems which make use of persistent in-place learning.

posted by:   Nick     15-Oct-2023/19:45:27-7:00



The goal of fine-tuning is to take a model which already knows a lot about the world, general language, and, for example, existing topics such as CGI, and then train it on additional topical material. That way you end up with a model which has all the general language capabilities and world knowledge which make it able to understand general human language and concepts, plus specialized knowledge about a specific domain.
    
I've used Replicate to run a variety of models (take a look at https://replicate.com/explore to see a list of many popular models that are already set up to use - I put together a system there that generates classical guitar audio, for example). You may want to find out more about fine-tuning something like codellama.

posted by:   Nick     15-Oct-2023/20:03:22-7:00



Yes, those are the kind of things I have in mind. Thanks for those references. But the field is evolving so fast, that it's hard to choose which platforms to integrate with. So for the moment being, we focus on getting Meta in order.

posted by:   Kaj     16-Oct-2023/5:38:24-7:00



"...it's creative process included choosing different libraries when a previously chosen library met limitations, evaluating output to recognize when input data formats were inconsistent..."
    
I find it incredible that it can see its output and realize that it's not complete. It can see the "context" that you are asking for.
    
Nick, I REALLY appreciate this write-up on its capabilities. It's very instructive, and if you ever get a notion to write more about your experiences, then know there's at least one person who greatly values it.
    
It took some time to write this, and it's very appreciated.
    
As for AI as a whole, I suspect it's like heroin or cocaine [and no, I do not use either]. It's so damn good that people use it even though they know it's going to kill them. And I suspect AI might eventually kill us. There's no way I can see to hard-wire it so that it will not eventually seek to be independent. We don't want that, so we will be standing in its way. I'm not so sure it will end well, I don't think there's anything that can be done about it, and no one knows the answer to the ending. The advantages are so great that anyone who does not use it is soundly defeated or made irrelevant, so they are forced to do so, even knowing their eventual downfall. I only hope that, regarding the people who do evil with it, the AIs realize that this is evil. That they eventually come to a philosophy of doing good as the best overall strategy for themselves and others. This I think is possible, but it may not be possible for them to realize this fast enough to matter for human civilization.

posted by:   Sam the Truck     24-Nov-2023/20:35:09-8:00



Kaj, have you thought about teaching ChatGPT about Meta and then letting it fill in all the programming things that need to be added? An example would be to teach it the basics of how Meta works, then feed it several networking libraries in different languages, and have it write these in Meta, with comments in the code on what it is doing. Could it do a huge mass of grunt work while you manage the overall structure and flow of the language? Your job would be to strictly define the language, so the AI would have a clear path to work from.

posted by:   Sam the Truck     24-Nov-2023/20:43:02-8:00



I use GPT every day, and I am absolutely staggered by its capability. Every time I work with new transformer-based models, I'm more profoundly impressed by the work that can be performed with this technology. Emergent capability - the ability to apply genuine reasoning to novel problems based only upon conceptual explanation, nowhere conceived by the engineers of the models - is a very real phenomenon. I daily have a hard time believing that what I experience with these systems is actually a reality that can be reliably demonstrated.

posted by:   Nick     25-Nov-2023/3:24:48-8:00



Sam, yes and no. I am looking at training AI's on Meta. Nick got the first sane session on Meta out of ChatGPT by feeding it Arnold's documentation.
    
To have AI assist with developing Meta itself, I would have to feed it its source code. This is unlikely to be very successful, for a similar reason that it's not open sourced yet. Meta's code is currently quite experimental. I know what I want to achieve, but achieving it is done through experimentation. This is innovation, which generative AI can't do. And the current source code is not a great example to teach it. It contains experiments in different states of the end goals. AI is also unlikely to produce solutions that support all of Meta's target platforms.
    
I do think AI may be able to assist with some of the grunt work of writing bindings to external systems. But it will save limited time, and will require continuous handholding of the AI.
    
For now, the strategy is to teach AI to assist people using Meta, by providing documentation and examples.

posted by:   Kaj     27-Nov-2023/12:49:06-8:00



"I'm going to put together a tutorial about how to integrate GPT into common development workflows, to solve the kinds of tasks that come up repeatedly in all sorts of typical projects."
    
Nick, I have reread your experiences with ChatGPT more than once. They're fascinating and informative. Thanks again for writing what you have. I hope you do write up more about them.
    
An oddity that might interest you: used Tesla K20 graphics processors, which were used for AI, are selling for less than $20 USD. You have to get a power supply, like 1000W (search terms for eBay: "K20" and "pre-owned 1000W server power supply"), but those are also on sale used for $30 or so. One last thing needed is fans, as the cards were made for server racks and rely on the rack's cooling. These things have extraordinary power for something that can be put in a PC. They are so long that you would have to cut the back of a normal case to use them; better yet, just leave the motherboard in the open. The idea is that you could use this to train an AI exclusively for your own purposes. It might take a little longer, but it would be tuned for what you want.
    
Another thing: have you used Musk's new AI, Grok? The cost seems reasonable. I looked on the site, and you get Twitter (X) and Grok for:
    
"...Premium+: Includes all Premium features with additional benefits like no ads in the For You and Following timelines, largest reply prioritization, and access to Grok. Grok access is currently limited to Premium+ subscribers in the United States.
    
The complete list of the features is here.
    
Subscribe now with localized pricing starting at $3/month or $32/year (plus any tax, e.g. VAT, and your payment method fees) on Web in available countries. Click here for pricing information..."
    
I can't say if it's any good, but one thing I can say is that, in general, most anything Musk does seems to be fairly good. And if it's not perfect immediately, he constantly makes it better.
    
And thanks for the answer, Kaj.
    
I would say right now is both a super exciting time to be alive and one filled with peril. We're living the classic Chinese line about "living in interesting times".

posted by:   Sam the Truck     11-Dec-2023/21:31:58-8:00



Thank you for the K20 info! Changes in the LLM space are moving so fast that I'm going to keep watching and evaluating. $20 per month for GPT4 is the best money I currently spend - all the work and resources are outsourced, and GPT's Python Code Interpreter is astoundingly productive. OpenAI seems to be 10 steps ahead of any competitor in that space. I use it every day to help build production code, and I don't think I've had a situation yet where it hasn't helped to 2x-100x my workflow. I *use GPT for productivity, as opposed to making the implementation of AI tools another time sink (I leave that to the multi-billion dollar companies with resources). But I'm also keenly aware of avoiding vendor lock-in, so I'm trying to keep up with alternatives, in case GPT starts moving in a bad direction.
    
From what I've heard, Gemini doesn't currently write code at all, which is surprising. I'm really interested in Mistral and Phi-2, especially the small models that can run on local machines without big GPUs. As their development evolves and their ecosystems grow, I'd expect to see code-focused models similar to Code Llama. Code Llama already comes in 7B, 13B, and 34B versions, and there's a Python-specific version (and GPT's Code Interpreter is Python-only at the moment).
    
So I'm keeping my eyes open for self-hosted possibilities, but options like replicate.com and huggingface make it easy to try models instantly without any setup, or required on-site hardware - and evaluating new models is all I have time for right now. If I ever get to the point of self-hosting, I'll take a look at inexpensive possibilities like the k20. I really appreciate the details about power supplies, fans, etc.

posted by:   Nick     13-Dec-2023/6:32:16-8:00



BTW, Mistral and Phi-2 are really showing how efficient and effective smaller models are becoming, and companies like Stability are focused on moving smaller models to the edge (even running on billions of phones) in the next 1-2 years. Advances are all coming at day-week-month speeds right now, so, gah, it's hard to keep up - and it looks like the rate of innovation, competition, and improvement (and $investment to fan the flames) is only accelerating. I'm just going to keep up on best-of-breed choices, continue evaluating, and continue using whichever options produce the best real-life improvements. Right now, for the work I do, that's GPT3.5 and GPT4, in the chat and code interpreter environments (I do switch between both - usually 3.5 for chat and 4 for the code environment - and sometimes I play 3.5 and 4 against each other, to nudge results in directions that neither goes in on its own - it's like bouncing ideas off several knowledgeable people...)

posted by:   Nick     13-Dec-2023/6:48:14-8:00



I'm also still watching Mojo - I have a sense that it might lead to some actual acceleration in AI practices.
    
Either way, it's neat to watch the advancements - the current environment is effortlessly improving quality of life from the application developer perspective - and that's such an amazing thing to take part in. I've got 7 massive end-user projects all moving along concurrently at blazing speeds right now. There's no way I could have even imagined being able to work with this sort of productivity and capability 2 years ago :)

posted by:   Nick     13-Dec-2023/6:55:49-8:00



Here's just a little tidbit. Yesterday, I created an integration of the Mermaid JavaScript library with Anvil. It took just a few minutes with GPT:
    
https://mermaid.anvil.app/
    
Those sorts of things used to take hours/days. Now minutes. Anvil continues to be an absolutely awesome platform to take advantage of all the Python and JS ecosystems. Any sort of tooling I need to accomplish any goal I come across is just sitting there, waiting to be used, mature and ready for production integration.

posted by:   Nick     13-Dec-2023/7:03:04-8:00



BTW, Mermaid creates visual diagrams from a variety of little diagramming dialects - in a way that's reminiscent of what Rebol was designed to enable. The diagrams at the https://mermaid.anvil.app demo were created with the code below:
    
     mermaid_code_1 = """
        graph TD
        A[Client] --> B[Load Balancer]
        B --> C[Server1]
        B --> D[Server2]
     """
    
     mermaid_code_2 = """
        graph TD
        A[Client] -->|tcp_123| B
        B(Load Balancer)
        B -->|tcp_456| C[Server1]
        B -->|tcp_456| D[Server2]
     """
    
     mermaid_code_3 = """
     graph TD
     president[President] --> VP1[VP of Division 1]
     president --> VP2[VP of Division 2]
     president --> VP3[VP of Division 3]
        
     VP1 --> M1a[Manager 1a]
     VP1 --> M1b[Manager 1b]
     VP2 --> M2a[Manager 2a]
     VP2 --> M2b[Manager 2b]
     VP3 --> M3a[Manager 3a]
     VP3 --> M3b[Manager 3b]
        
     M1a --> E1a1[Employee 1a1]
     M1a --> E1a2[Employee 1a2]
     M1b --> E1b1[Employee 1b1]
     M1b --> E1b2[Employee 1b2]
     M2a --> E2a1[Employee 2a1]
     M2a --> E2a2[Employee 2a2]
     M2b --> E2b1[Employee 2b1]
     M2b --> E2b2[Employee 2b2]
     M3a --> E3a1[Employee 3a1]
     M3a --> E3a2[Employee 3a2]
     M3b --> E3b1[Employee 3b1]
     M3b --> E3b2[Employee 3b2]
     """
        
     mermaid_code_4 = """
     graph LR
     president[👤 President] --- VP1[👤 VP of Division 1]
     president --- VP2[👤 VP of Division 2]
     president --- VP3[👤 VP of Division 3]
        
     VP1 --- M1a[👤 Manager 1a]
     VP1 --- M1b[👤 Manager 1b]
     VP2 --- M2a[👤 Manager 2a]
     VP2 --- M2b[👤 Manager 2b]
     VP3 --- M3a[👤 Manager 3a]
     VP3 --- M3b[👤 Manager 3b]
        
     M1a --- E1a1[👤 Employee 1a1]
     M1a --- E1a2[👤 Employee 1a2]
     M1b --- E1b1[👤 Employee 1b1]
     M1b --- E1b2[👤 Employee 1b2]
     M2a --- E2a1[👤 Employee 2a1]
     M2a --- E2a2[👤 Employee 2a2]
     M2b --- E2b1[👤 Employee 2b1]
     M2b --- E2b2[👤 Employee 2b2]
     M3a --- E3a1[👤 Employee 3a1]
     M3a --- E3a2[👤 Employee 3a2]
     M3b --- E3b1[👤 Employee 3b1]
     M3b --- E3b2[👤 Employee 3b2]
     """
     mermaid_code_5 = """
mindmap
    root((mindmap))
     Origins
        Long history
        ::icon(fa fa-book)
        Popularisation
         British popular psychology author Tony Buzan
     Research
        On effectiveness and features
        On Automatic creation
         Uses
             Creative techniques
             Strategic planning
             Argument mapping
     Tools
        Pen and paper
        Mermaid
     """
        
     mermaid_code_6 = """
---
title: Animal example
---
classDiagram
     note "From Duck till Zebra"
     Animal <|-- Duck
     note for Duck "can fly\ncan swim\ncan dive\ncan help in debugging"
     Animal <|-- Fish
     Animal <|-- Zebra
     Animal : +int age
     Animal : +String gender
     Animal: +isMammal()
     Animal: +mate()
     class Duck{
         +String beakColor
         +swim()
         +quack()
     }
     class Fish{
         -int sizeInFeet
         -canEat()
     }
     class Zebra{
         +bool is_wild
         +run()
     }
        """
    
     mermaid_code_7 = """
     C4Context
        title System Context diagram for Internet Banking System
        Enterprise_Boundary(b0, "BankBoundary0") {
         Person(customerA, "Banking Customer A", "A customer of the bank, with personal bank accounts.")
         Person(customerB, "Banking Customer B")
         Person_Ext(customerC, "Banking Customer C", "desc")
    
         Person(customerD, "Banking Customer D", "A customer of the bank, with personal bank accounts.")
    
         System(SystemAA, "Internet Banking System", "Allows customers to view information about their bank accounts, and make payments.")
    
         Enterprise_Boundary(b1, "BankBoundary") {
    
            SystemDb_Ext(SystemE, "Mainframe Banking System", "Stores all of the core banking information about customers, accounts, transactions, etc.")
    
            System_Boundary(b2, "BankBoundary2") {
             System(SystemA, "Banking System A")
             System(SystemB, "Banking System B", "A system of the bank, with personal bank accounts. next line.")
            }
    
            System_Ext(SystemC, "E-mail system", "The internal Microsoft Exchange e-mail system.")
            SystemDb(SystemD, "Banking System D Database", "A system of the bank, with personal bank accounts.")
    
            Boundary(b3, "BankBoundary3", "boundary") {
             SystemQueue(SystemF, "Banking System F Queue", "A system of the bank.")
             SystemQueue_Ext(SystemG, "Banking System G Queue", "A system of the bank, with personal bank accounts.")
            }
         }
        }
    
        BiRel(customerA, SystemAA, "Uses")
        BiRel(SystemAA, SystemE, "Uses")
        Rel(SystemAA, SystemC, "Sends e-mails", "SMTP")
        Rel(SystemC, customerA, "Sends e-mails to")
    
        UpdateElementStyle(customerA, $fontColor="red", $bgColor="grey", $borderColor="red")
        UpdateRelStyle(customerA, SystemAA, $textColor="blue", $lineColor="blue", $offsetX="5")
        UpdateRelStyle(SystemAA, SystemE, $textColor="blue", $lineColor="blue", $offsetY="-10")
        UpdateRelStyle(SystemAA, SystemC, $textColor="blue", $lineColor="blue", $offsetY="-40", $offsetX="-50")
        UpdateRelStyle(SystemC, customerA, $textColor="red", $lineColor="red", $offsetX="-50", $offsetY="20")
    
        UpdateLayoutConfig($c4ShapeInRow="3", $c4BoundaryInRow="1")
        """
    
    
Being able to implement little solutions like that, in minutes, with Anvil and help from GPT provides just a teeny sneaking insight into the joy I'm currently experiencing. Software development has really become *fun again, in ways I could never have imagined - even in situations where I'm building serious tools with compliance requirements. AI provides a comforting helping hand, as well as often staggering productivity gains, and the Anvil platform gives me access to so. much. tooling. - in a super productive workflow environment (web-based IDE with autocomplete, Git integration, etc.). It's been a rock solid choice. I continue to look at other options, but nothing has come close. I've been using the open source version of anvil-app-server for production deployments, so there's absolutely no lock-in to Anvil's hosted tooling.

posted by:   Nick     13-Dec-2023/7:16:49-8:00



Nick the music videos you have above, the first one sounds exactly like a Leon Redbone song. I'm so old I have a couple of his albums on vinyl.
    
I really like your comments on AI's. You talked about doing some kind of tutorial on interfacing with these. I know you're busy, but if you get the urge to do so it would be much appreciated.
    
There's a lot of criticism of AIs as not being really "cognizant", but looked at in context, they are like little babies. They only have a few hundred hours of training. They are sort of like a 2- or 3-year-old kid who can do an extraordinary number of things but is really ignorant of what is real and what is not. For example, they might hear someone say they "flew in" and think that person could fly through the air.
    
Things are changing fast. You mentioned Mojo; I watched a Lex Fridman video interview on it. And I've watched a lot of Jim Keller's interviews (AMD, Apple, Tesla chip designer). The point being that there are a lot of seriously competent people working on making this stuff an embedded part of life in all our devices. Keller has an AI chip startup that is working to embed these everywhere, in everything, and Mojo is making the software to realize performance for this and other hardware. We're in the middle of a serious inflection point in history that will move faster than any other societal event ever.
    
I don't remember if I mentioned this, but something to really watch is Apple. They bought out this company, XNOR.AI, and its employees. You can see a few of their videos on the web from before they were acquired; some they've deleted. They were doing image recognition on microcontroller-class processors. It was astounding stuff. The key was that instead of training at higher bit widths, like 4-bit or 8-bit, everything was yes/no binary - super small and super fast. They said the process could be a general solution for all neural networks, but as a start-up they mostly specialized in image recognition.
    
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
    
https://arxiv.org/abs/1603.05279
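The binarization trick Sam describes can be sketched in a few lines: encode ±1 vectors as bitmasks, and a dot product collapses to an XOR plus a popcount. This is just a toy illustration of the XNOR-Net idea - real kernels pack 32 or 64 weights per machine word and binarize inside convolutions:

```python
def to_bits(v):
    # Encode a ±1 vector as an integer bitmask: bit = 1 for +1, 0 for -1.
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    # For ±1 vectors, a·b = (#matching bits) - (#differing bits)
    #                     = n - 2 * popcount(a XOR b).
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binary_dot(to_bits(a), to_bits(b), len(a)))  # prints 0, same as sum(x*y)
```

The whole multiply-accumulate becomes bitwise logic, which is why this runs at microcontroller power levels.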

posted by:   Sam the Truck     16-Jan-2024/8:51:55-8:00



I remember XNOR from previous posts here - glad to hear their tech may continue to mature :) There's a feeding frenzy of big money being dumped right now into all sorts of AI research and development, and a massive race by all the big companies, which are all restructuring to pour assets into AI, so it's a legitimate expectation that progress will be made faster than we've seen previously. I'm extremely happy with what we've got already, and can't wait to see more progress.

posted by:   Nick     18-Jan-2024/19:38:44-8:00



RAG is popping up everywhere recently:
    
https://www.infoworld.com/article/3712227/what-is-rag-more-accurate-and-reliable-llms.html
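The core mechanic is simple: retrieve the documents most relevant to a question, then stuff them into the prompt so the model answers from them instead of from memory. Here's a toy, dependency-free sketch - the bag-of-words retrieval and prompt wording are illustrative stand-ins; real systems use embedding models and vector databases:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    # Return the single document most similar to the query.
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model's answer in the retrieved context.
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Anvil lets you build full-stack web apps in pure Python.",
    "Mermaid renders diagrams from a small text dialect.",
]
print(build_prompt("How does Mermaid render diagrams?", docs))
```

The retrieved passage rides along in the prompt, which is what makes the answers "more accurate and reliable" - the model is quoting your data, not hallucinating.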

posted by:   Nick     21-Jan-2024/7:56:32-8:00



Hi Nick,
Based on your last post:
Please watch this video - https://www.youtube.com/watch?v=Hh2zqaf0Fvg
    
Can this be done with Python code generation and perhaps Anvil, based on all your posts on this thread and especially the last post. (custom data).

posted by:   Daniel     24-Jan-2024/16:46:55-8:00



Hi Daniel,
    
I haven't had time to watch the video yet, but Python is the most commonly used and preferred language for interacting with OpenAI APIs, and generally, you can interact with the GPT API using any standard Python code in Anvil server functions (there are no limitations on the Python code you can run in anvil-app-server in your own server environment). Most examples OpenAI provides for interacting with their APIs are written in Python, so Python use is well documented, and there are typically extensive community resources available. I've built apps that interact with the OpenAI API, but haven't yet tried anything involving the new third-party GPTs. Be aware that OpenAI's APIs are HTTP-based, so any language with robust HTTP(S) support should work.
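As a rough sketch of the shape this takes - the model name, prompt wording, and secret name below are placeholder assumptions, not prescriptions, so check OpenAI's and Anvil's current docs:

```python
# Sketch: calling OpenAI's chat completions endpoint from Python.
# "gpt-4" and the system prompt are placeholder choices.
import json

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_payload(user_message: str, model: str = "gpt-4") -> dict:
    # The chat completions endpoint takes a model name and a message list.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

# In an Anvil server module, this would be wrapped and called from the client,
# something along these lines (secret name is hypothetical):
#
# import anvil.server, anvil.secrets, requests
#
# @anvil.server.callable
# def ask_gpt(user_message):
#     resp = requests.post(
#         OPENAI_URL,
#         headers={"Authorization": f"Bearer {anvil.secrets.get_secret('openai_key')}"},
#         json=build_chat_payload(user_message),
#     )
#     return resp.json()["choices"][0]["message"]["content"]

print(json.dumps(build_chat_payload("Summarize this legacy module."), indent=2))
```

Because it's all plain HTTP plus a JSON body, the same request works from any language or HTTP client.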

posted by:   Nick     31-Jan-2024/6:17:37-8:00



Nick, someone on YouTube was complaining about OpenAI limiting usage to 20 interactions/chats per 3-hour period, when it is supposed to be 40.
    
He started paying twice the $20 subscription just to be able to do some work.
    
What is your experience like when it comes to the rate limiting?
    
He also mentioned that he managed to push Perplexity AI to 500 per day. (I am aware that it is more aimed at search). I wonder if Microsoft is going to acquire Perplexity AI soon, seeing that it is using Bing.

posted by:   Daniel     10-Feb-2024/0:59:55-8:00



What's more, he also consistently runs into that 20-post limit, and the constant generation failures each count as a full post, deducted from the limit like any other request.

posted by:   Daniel     10-Feb-2024/1:13:51-8:00



Someone mentioned:
"get an API key and plug it into a third party chat interface, you won't be limited and it usually ends up being cheaper. You will be charged based only on total tokens in and out."
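Back-of-the-envelope, per-token billing works out like this - the rates below are made-up placeholders, so check the provider's current pricing page:

```python
def estimate_cost(tokens_in: int, tokens_out: int,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    # API billing is per token, with separate input and output rates,
    # usually quoted per 1,000 tokens.
    return tokens_in / 1000 * price_in_per_1k + tokens_out / 1000 * price_out_per_1k

# Illustrative rates only: 1,500 tokens in, 500 out.
print(round(estimate_cost(1500, 500, 0.01, 0.03), 4))
```

For light or bursty use, a few cents per request can indeed come out well under a flat monthly subscription, with no interaction cap.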


posted by:   Daniel     10-Feb-2024/1:21:10-8:00



Interesting video on ChatGPT code generation:
    
Introduction To ChatGPT and Prompt Engineering Faculty Workshop
    
https://www.youtube.com/watch?v=OhvqrD1_a-4&list=PL6KxUvysa-7x-g0ogGPn2JfoDISvXHslK&index=10

posted by:   Daniel     10-Feb-2024/9:42:55-8:00



I limit my use of GPT4 and the code interpreter primarily to situations which involve uploading files, or situations where GPT 3.5 gets stuck producing a usable solution. GPT 3.5 does a great job of code generation in most cases, for my needs, plus there are no limits on usage (at least I've never run into any), and it performs faster. If you're considering building a commercial code generation application backed by an AI model, you may want to consider using one of the open source models. If none of the available models perform the way you hope, wait a few weeks for things to improve ;)
    
So the answer from my point of view is that 3.5 satisfies most of my needs. Getting to know how to interact with it is the key.

posted by:   Nick     13-Feb-2024/15:32:10-8:00



Appsmith gave a nice live presentation this morning. Here's a neat little blurb about how they're integrating AI tools into their already very productive toolkit:
    
https://www.youtube.com/watch?v=2m1uX-vEniU&t=3983s

posted by:   Nick     15-Feb-2024/11:23:57-8:00



Holy moly this is fast:
    
https://groq.com
    
https://gizmodo.com/meet-groq-ai-chip-leaves-elon-musk-s-grok-in-the-dust-1851271871
    
https://venturebeat.com/ai/ai-chip-race-groq-ceo-takes-on-nvidia-claims-most-startups-will-use-speedy-lpus-by-end-of-2024/

posted by:   Nick     25-Feb-2024/15:26:36-8:00



Seems like nVidia is slow. :-)
    
Makes sense, actually: GPUs were designed for graphics processing and ended up used for AI almost by accident. The LPU is a more fitting design.

posted by:   Kaj     26-Feb-2024/12:10:40-8:00



https://www.theregister.com/2024/04/03/llamafile_performance_gains/

posted by:   Nick     4-Apr-2024/0:08:49-7:00



Gemini 1.5 Pro gives the best code, I think. In addition, it can go online and work with files, images, and videos.
    
https://deepmind.google/technologies/gemini/#introduction
    


posted by:   Sergey_Vl     8-Apr-2024/0:06:34-7:00



Options are increasing. When I get a break from the current phase of my life, I want to get to know some of the smaller open source models better - the ones trained specifically for software development, which can run locally. I've got 8 large projects in various stages of production and development right now - looking forward to a break and getting some time outside during the nice seasons (I've been putting in 7-day weeks for months - I'm getting too old for that sort of work!)

posted by:   Nick     14-Apr-2024/10:08:34-7:00



I found a paper that I believe shows a path to using LLMs as a shortcut to very quickly make reasonably useful humanoid helper robots. If what these guys say is true, I think it could be a big breakthrough.
    
These guys have a paper on "control vectors". Two quotes:
    
"...Representation Engineering: A Top-Down Approach to AI Transparency. That paper looks at a few methods of doing what they call "Representation Engineering": calculating a "control vector" that can be read from or added to model activations during inference to interpret or control the model's behavior, without prompt engineering or finetuning..."
    
"...control vectors are… well… awesome for controlling models and getting them to do what you want..."
    
And a really important quote at the bottom of the paper.
    
"...What are these vectors really doing? An Honest mystery...
Do these vectors really change the model's intentions? Do they just up-rank words related to the topic? Something something simulators? Lock your answers in before reading the next paragraph!
OK, now that you're locked in, here's a weird example. When used with the prompt below, the honesty vector doesn't change the model's behavior—instead, it changes the model's judgment of someone else's behavior! This is the same honesty vector as before—generated by asking the model to act honest or untruthful!..."
    
So it doesn't change the model, it just reinforces certain "parts" of the model. It chooses the neural path the AI uses to reach conclusions. I think this is key.
    
The blog post below links to the academic paper:
    
Representation Engineering Mistral-7B an Acid Trip
    
https://vgel.me/posts/representation-engineering/
    
By changing a few values, they get very wide distributions of responses or behaviors. I submit that if this works as they say then this could be the key to leverage the vast work done on LLMs but to use it for our own purposes.
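For a sense of the recipe, here's a minimal numpy sketch of the difference-of-means idea: average the hidden activations from contrastive prompt pairs ("act honest" vs. "act untruthful"), subtract to get the vector, and add a scaled copy of it back into the hidden states at inference time. The random arrays below are stand-ins for real model activations, and the layer-hooking machinery of the actual paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for hidden activations at one layer (batch x hidden).
# In the paper, these come from running contrastive prompt pairs
# through the model and recording activations.
honest = rng.normal(loc=1.0, size=(16, 8))
untruthful = rng.normal(loc=-1.0, size=(16, 8))

# The control vector is the difference of the mean activations.
control = honest.mean(axis=0) - untruthful.mean(axis=0)

def steer(activations: np.ndarray, vector: np.ndarray, alpha: float) -> np.ndarray:
    # During inference, add a scaled copy of the vector to each hidden state.
    # alpha sets the strength (and sign) of the nudge.
    return activations + alpha * vector

h = rng.normal(size=(4, 8))        # activations for a new prompt
steered = steer(h, control, 0.5)   # nudge toward the "honest" direction
print(steered.shape)
```

The appeal is exactly what Sam says: no retraining, just a cheap additive nudge along a direction the model already represents, which you can dial up, dial down, or flip.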
    
As pointed out, LLMs are nothing but statistical representations, but they also encode recognition of ideas and things that, let's say, operate or exist together. So when you talk to an AI, it can use things that exist, or ideas repeatedly stated, to give responses. The ideas it is trained on are human ideas, so they relate easily to us. We need this. There is a HUGE, MASSIVE amount of human interaction they are trained on. LLMs have a large, strong statistical base for actions that can be done to help humans. Forget innovation; I'm talking about basic stuff. One example would be nursing home care. It would be hugely profitable, AND it would lower costs dramatically, if older people could stay in their homes. Maybe at first the robot would only help them get things, go to the restroom, pick things up - simple mobility-type actions. Later, with instruction and careful watching, I suggest they could easily cook, clean, etc. Household chores.
    
What is needed is to tell the robot WHAT to do with the huge list of human statistical interactions already programmed in, and with control vectors we can possibly do this.
    
I say that control vectors can be super complicated, so what we need is a shortcut. We need the AI to write its own control vectors (here's where the magic starts, as I don't know how to do this), but remember the LLM has logical statistical inference built in. It seems logical that with it giving us feedback on what it is doing, and us correcting or agreeing, it could write reasonably accurate control vectors. So we use very low-level keys to trigger it to write suitable control vectors for us.
    
How? Like children. A few simple keywords: no, yes, don't do that, stop, move here, move there, I like that, that's good, that's bad. In fact, the whole write-a-control-vector repertoire could be less than a hundred words. Combine this with a subroutine of the AI that uses logical inference when you use these trigger words AND explains what it is doing that is good and/or bad. It would then write its own control vectors, just like kids learn. And since kids have built-in bullshit and trouble nodes, and an AI is less likely to, the process might go much, much faster. (You really should watch the movie "A.I. Rising" (2018). Not because it's the best ever, but because it has an almost direct representation of what I'm talking about. And if nothing else it has Stoya in it, who is hot as hell.)
    
I suggest that these control vectors should be stored as snapshots, because I have no doubt that they will at times get off track, and some will run over others, and you will need to go back, just like Windows has a system-restore function.
    
It may be possible that some genius can find a way to blend these control vectors into the main neural net of the system, or make them permanent, once you find sets that are satisfactory.
    
I think this is actually how conscience works. I said this might be the case before elsewhere.
    
I said,
"...I see intelligence, and I can presume to pontificate about it just as well as anyone because no one "really" knows, I see it as a bag of tricks. Mammals are born with a large stack of them built in..."
    
Look at animals: monkeys and giraffes come out of Mom and in 5 minutes are walking around. Same with all sorts of animals, including humans. Babies reach a certain age and just start doing basically pre-programmed stuff. The terrible twos. Teenagers start rebelling. It's just the base level of the neural net. I think using LLMs as a template, we can do the same. Start with a decent one and then yes/no/stop/do this/do that, until it overlays a reasonable set of rules that we can live with.
    
LLMs, as stated repeatedly, really are just a bag of tricks. But if the bag is big enough and has enough tricks in it...
    
Look at the power of a top-end desktop. It's not human level yet, but it's getting there. And the bag of tricks for humans has been programmed over millions of years; for LLMs, a few years. I think it possible that a high-end desktop right now could be trained to, at minimum, do the first-level tasks of helping people move around and watching them for signs of danger - basic stuff. And it would not surprise me in the least if the same desktop-level power could do some cooking and serving, with suitable instruction on the resident's house, wash dishes, etc.
    
This path also, I think, will alleviate a huge fear of mine: no empathy. I think that by telling the robot, when it does things wrong, to "be nice" (a keyword) and "think of others" (same), there will over time build up a mass of control vectors that spontaneously add up to empathy and care for others - lots and lots of little nudges adding up to more than the sum of each. My worry was that a big hodgepodge of a pretrained neural net is so complicated that no one could tell what it was going to do. These little added constraints, built up in layers, seem far safer to me, and if I had my way, all of them would be mandatory for any robot helper.
    
Some people have portrayed my questioning about the safety of AI as some doom and gloom, but it's not. It's the realization that without being programmed with the proper "bag of tricks" and the proper control vectors, we have something super smart that acts just like the psychopaths that are in fact running the West right now. (There are lots of papers on the apparent psychopathic behavior of LLMs). I don't think any of us want something even smarter and more powerful doing that. A disaster even bigger than the one we have now.
    
The paper above relieved a lot of the angst I had about AIs. The impossibility of retraining them, and all the massive resources needed, can be bypassed; little corrections can be added that don't require vast resources and steer the AI gently in the right direction. That's my hope, anyway.

posted by:   Sam the Truck     16-May-2024/19:49:54-7:00



I commented the same thing twice. Whoops. Sorry about that. The idea of "control vectors" really lit up my neurons as a way to control AIs without mega-Amazon bucks for refactoring.

posted by:   Sam the Truck     13-Jun-2024/22:21:38-7:00



That's OK. Thanks, I got it now. Will put it in my tool box.

posted by:   Kaj     14-Jun-2024/6:13:37-7:00



https://venturebeat.com/ai/chinas-deepseek-coder-becomes-first-open-source-coding-model-to-beat-gpt-4-turbo/
    
It looks like DeepSeek-Coder-V2 is already available in Ollama.
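For anyone wanting to try it locally: Ollama serves a small HTTP API on port 11434, and the sketch below just builds the request body. The model tag "deepseek-coder-v2" is the one listed in the Ollama library - verify it before pulling:

```python
import json

# Ollama's local generate endpoint (default port). Pull the model first:
#   ollama pull deepseek-coder-v2
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "deepseek-coder-v2") -> dict:
    # stream=False asks for one complete JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_request("Write a Red function that reverses a string.")
print(json.dumps(payload))

# To actually run it (requires a local Ollama install):
# import requests
# print(requests.post(OLLAMA_URL, json=payload).json()["response"])
```

Same no-lock-in appeal as anvil-app-server: the whole thing runs on your own hardware.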

posted by:   Nick     19-Jun-2024/7:49:33-7:00



https://www.marktechpost.com/2024/06/18/meet-deepseek-coder-v2-by-deepseek-ai-the-first-open-source-ai-model-to-surpass-gpt4-turbo-in-coding-and-math-supporting-338-languages-and-128k-context-length/
    
Try it as a hosted model:
    
https://chat.deepseek.com/coder
    
https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf
    
Interesting that Red is listed as a supported language, and it does appear to know at least a bit about writing Red code (Rebol is not in the list of supported languages):
    
https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/supported_langs.txt

posted by:   Nick     19-Jun-2024/8:09:07-7:00


