It might be easier to have your software create a sketch for the Arduino software to upload. Let their software do the heavy lifting of getting it there.
Unlikely. As far as I know, the IDE does not support assembly language directly and that is my output.
There is somebody out there working on the opposite approach - translating Python to C which may work better with the Arduino IDE.
I have no interest in translating Python to C.
https://groups.google.com/a/arduino.cc/d/msg/developers/Le0DXv0tm9Y/yq4KUXwSW-QJ
You can include a binary in it …
What I have found to work is to create an Arduino sketch having the same name and compile it. If I later overwrite that binary with mine, clicking "Upload" in the IDE uploads mine. The key is for the binary to be newer than the .INO file containing the sketch.
It looks like you can specify the compiler here … wouldn’t it be cool to write Python in the editor provided and, by choosing the right platform, have it compile with your compiler and upload?
This is interesting
We said that the Arduino IDE determines a list of files to compile. Each file can be source code written in C (.c files), C++ (.cpp files) or Assembly (.S files). Every language is compiled using its respective recipe:
recipe.c.o.pattern - for C files
recipe.cpp.o.pattern - for CPP files
recipe.S.o.pattern - for Assembly files
sounds like you might be able to create recipe.py.o.pattern
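As a rough, untested sketch: a recipe along the lines of the existing C one, with the compiler command swapped out, might go in a core’s platform.txt. The “py2asm” name below is purely hypothetical; the `{compiler.path}`, `{includes}`, `{source_file}`, and `{object_file}` placeholders are the ones the real recipes use.

```
# Hypothetical platform.txt entry -- "py2asm" stands in for whatever
# driver actually turns the .py file into an object file.
recipe.py.o.pattern="{compiler.path}py2asm" {includes} "{source_file}" -o "{object_file}"
```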
Thank you! That is one of the missing links.
The other is to either:
- generate assembly language in gas/AT&T format, which I despise, or
- teach my assembler to generate ELF object files and use that.
Not sure if this would help, but NASM uses Intel syntax, and the ELF format is fairly simple to “spoof”: http://timelessname.com/elfbin/
https://www.muppetlabs.com/~breadbox/software/tiny/teensy.html
dd if=/dev/stdin of=tiny.asm
; tiny.asm
BITS 32
org 0x00010000
db 0x7F, "ELF" ; e_ident
dd 1 ; p_type
dd 0 ; p_offset
dd $$ ; p_vaddr
dw 2 ; e_type ; p_paddr
dw 3 ; e_machine
dd _start ; e_version ; p_filesz
dd _start ; e_entry ; p_memsz
dd 4 ; e_phoff ; p_flags
_start:
mov bl, 42 ; e_shoff ; p_align
xor eax, eax
inc eax ; e_flags
int 0x80
db 0
dw 0x34 ; e_ehsize
dw 0x20 ; e_phentsize
db 1 ; e_phnum
; e_shentsize
; e_shnum
; e_shstrndx
filesize equ $ - $$
^D
$ nasm -f bin -o a.out tiny.asm
$ chmod +x a.out
$ ./a.out ; echo $?
42
$ wc -c a.out
45 a.out
I found those pages when initially starting to learn the ELF format. My usual approach to learning a file format is to write a dumper, both to help understanding what every little bit means and to have a tool to check my work when I begin creating files in that format.
Creating a simple file which passes muster is a long way from generating something a linker can use.
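A dumper of the sort described above can start very small. Here is a minimal sketch in Python (32-bit little-endian headers only), with the field layout taken from the System V ABI ELF spec; the function name is my own:

```python
# Minimal ELF32 header dumper: parse the fixed-size fields that follow
# the 16-byte e_ident, per the ELF spec (Elf32_Ehdr is 52 bytes total).
import struct

ELF32_FIELDS = ("e_type", "e_machine", "e_version", "e_entry", "e_phoff",
                "e_shoff", "e_flags", "e_ehsize", "e_phentsize", "e_phnum",
                "e_shentsize", "e_shnum", "e_shstrndx")

def read_elf32_header(data):
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # Two halfwords, five words, six halfwords, little-endian.
    values = struct.unpack_from("<HHIIIIIHHHHHH", data, 16)
    return dict(zip(ELF32_FIELDS, values))
```

Run against the 45-byte a.out above, it should report e_type 2 (ET_EXEC) and e_machine 3 (EM_386), matching the `dw 2` / `dw 3` lines in the source.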
Background information about linking:
One of the big things I have yet to master in ELF, relocations:
And a lot of the nitty gritty of the format:
It may be due to my using a several years old version of the IDE, but I cannot find those recipe pattern files.
I believe you need at least version 1.5… at least that is what the title of the page I sent said
Also
I am running 1.6.5. It is probably over two years old, but it can do Wemos, so the support for installable cores is there in some form.
It has come to my attention that Python uses round to negative infinity when dividing signed integers.
Most traditional compilers use round to zero (C99 and later actually require it; C89 left it implementation-defined). I believe division hardware uses round to zero as well.
The difference is how the quotient is rounded; the remainder is adjusted accordingly.
                       Round toward zero       Round toward -infinity
Dividend   Divisor    Quotient   Remainder    Quotient   Remainder
    7         3           2          1            2          1
   -7         3          -2         -1           -3          2
    7        -3          -2          1           -3         -2
   -7        -3           2         -1            2         -1
Unfortunately, this complicates the division code a little bit.
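The table above can be checked in Python itself: `divmod` gives the floored results, and a small helper (my name, not a standard function) reproduces the round-toward-zero behavior:

```python
def trunc_divmod(a, b):
    """Quotient rounded toward zero, remainder adjusted to match
    (the behavior of C99 and typical division hardware)."""
    q = a // b                 # Python's floored quotient
    if q < 0 and q * b != a:   # floored and truncated differ only here
        q += 1
    return q, a - q * b

for a, b in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    print(a, b, trunc_divmod(a, b), divmod(a, b))
```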
A detailed description:
While still working out how to integrate the variable-precision integer code into the compiler (I am starting to think this really is brain science or rocket surgery), I decided to start picking off some of the other things on my list.
First up is completing the function call interface. Previously, I was able to call “print” as a procedure, but not as a function. That is, not from within an expression.
Now, this code
print(print(1))
yields this output:
1
None
The obvious follow-up is to throw together a skeletal “peek” function.
The following code
print(peek(526))
print(peek(527))
print(peek(528))
print(peek(529))
yields this output:
-87
1
-94
0
To save you the trouble of converting,
526 = $20E
-87 = $A9
-94 = $A2
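The conversion is just the two’s-complement reading of a single byte; a one-liner (hypothetical helper name) covers it:

```python
def to_signed8(byte):
    # Interpret an unsigned byte (0..255) as signed two's complement.
    return byte - 256 if byte >= 128 else byte

print(to_signed8(0xA9))  # -87
print(to_signed8(0xA2))  # -94
print(to_signed8(0x01))  # 1
```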
And finally, the generated code for one of the lines:
00098 ; 00001 print(peek(526))
020E A9 01 [2] 00099 lda #1
0210 A2 00 [2] 00100 ldx #0
0212 20 0C2D [6] 00101 jsr AllocPargsAndKargs
0215 A0 03 [2] 00102 ldy #3
0217 A2 1A [2] 00103 ldx #I_00526&$FF
0219 A9 24 [2] 00104 lda #I_00526>>8
021B 20 0C93 [6] 00105 jsr StoreParg
021E A2 26 [2] 00106 ldx #_peek&$FF
0220 A9 24 [2] 00107 lda #_peek>>8
0222 20 0C15 [6] 00108 jsr GetObject
0225 20 0CD8 [6] 00109 jsr object.__call__
0228 20 0CB5 [6] 00110 jsr FreePargsAndKargs
022B A5 17 [3] 00111 lda PtrA+1
022D 48 [3] 00112 pha
022E A5 16 [3] 00113 lda PtrA
0230 48 [3] 00114 pha
0231 A9 01 [2] 00115 lda #1
0233 A2 00 [2] 00116 ldx #0
0235 20 0C2D [6] 00117 jsr AllocPargsAndKargs
0238 A0 03 [2] 00118 ldy #3
023A 68 [4] 00119 pla
023B AA [2] 00120 tax
023C 68 [4] 00121 pla
023D 20 0C93 [6] 00122 jsr StoreParg
0240 A2 1D [2] 00123 ldx #_print&$FF
0242 A9 24 [2] 00124 lda #_print>>8
0244 20 0C15 [6] 00125 jsr GetObject
0247 20 0CD8 [6] 00126 jsr object.__call__
024A A0 02 [2] 00127 ldy #2
024C 20 0CA3 [6] 00128 jsr DerefParg
024F 20 0CB5 [6] 00129 jsr FreePargsAndKargs
To be honest, that is a whole lot of code. I wonder how it compares to the speed of interpreted BASIC.
Now to do “poke” so that Draco can have his self-modifying code…
Edit: instrumentation in my emulator reports that this line of code
print(peek(526))
uses 10979 machine cycles. That is outputting through a virtual UART with no wait time between characters.
Further edit: Instead of taking the entire line of code, it is much more informative knowing how much time just the peek takes. That turns out to be only 1649 of those 10979 cycles. The bulk was taken up by the monstrous formatting code in print.
Actually, I don’t need self-modifying code, BUT if someone wants to access any of the hardware of the machine, they will need it.
Once poke, pass, and bitwise AND are available, I will be trying something like
while peek(-12287) & 8 == 0:
pass
poke(-12288, peek(-12288))
Any guess on what that does?
-12287 = $D001 = 53249
C64/C128
$D001 (53249) Y-Coordinate Sprite#0 Sets horizontal position of Sprite#0
Apple IIe, IIc, IIgs, and II+ with Applesoft ROM Language Card
$D000 - $F7FF (53248 - 63487): Applesoft Interpreter
I’m not certain at all … one thing to note is that most pokes and peeks do not refer to negative memory … yes, yes, I know it is signed as opposed to looking at it unsigned … but, just a suggestion, it might be best not to allow negative pokes and peeks.
Looks like it is waiting on bit 3 to be 1, then writing the data right back at the next lower address.
Could be anything … what is it for? What system?
peek ignores the part of the number above the low two bytes, so -12288 is effectively the same as $D000. Numbers in Python are always signed, plus most of the BASIC programming books I remember treated numbers 32K or larger as negative, so I did it that way.
My emulator, as currently configured, has a virtual UART (serial port) at base address $D000, so that code waits for a character (repeatedly checking the input status bit), then reads and echoes it.
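That low-two-bytes behavior is just a 16-bit mask; in Python terms (a sketch of the address handling, not the actual compiler output):

```python
def to_addr16(n):
    # peek/poke keep only the low 16 bits of the (signed) Python integer.
    return n & 0xFFFF

print(hex(to_addr16(-12288)))  # 0xd000  (UART data register)
print(hex(to_addr16(-12287)))  # 0xd001  (UART status register)
print(to_addr16(53248))        # 53248, i.e. $D000 again
```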
You may have noticed I used the phrase “skeletal peek function” before. That was because I intended to throw together the absolute minimum necessary to get an address from the caller and return the value at that memory location. It did none of the other things a well-behaved Python function has to do, like checking all passed arguments, issuing an error message if anything was missing (or extra), and verifying that the address passed was an integer and not a string or a list. It just assumed everything was correct, to get something working.
That code is done correctly now, and there is quite a bit of it. This is something unique to Python and a handful of other languages, where this stuff is dealt with at run time instead of compile time. You get flexibility and generality, but at a cost.