Open Source Research Center
Open Source Research Center is creating a community for creative people, along with projects and tutorials.

Features

  • Since the Discord bot here demands your soul and otherwise refuses to work, send me (@RPBCACUEAIIBH) a message on Discord after you subscribe, so I can give you the "Supporter" role!
Open Source Research Center
Public post

Yesterday I made a chart of Li-Ion settings for the EASUN POWER 60A MPPT Solar Charge Controller v1.34 (aka ICharger-MPPT-6048).

These settings are not tested, so use them at your own risk. I will update the chart as needed during my tests. https://docs.google.com/spreadsheets/d/1jXfu50zDEAuesd85Bvsh4YKZ42yKbZYofKetM8fhrpk/edit?usp=sharing

Open Source Research Center
Public post

A Bash Script Vulnerability and Solution

  • Bash executes commands from a script one by one, as if they came from terminal input (unless you group them in curly brackets). A script can be edited while it is running, and Bash executes the edited content.

  • If a script is executed with sudo by an admin and is writable by that admin (or a group, or anyone), then it is also writable by any other running process belonging to whoever has write access, without requiring root permissions.

  • This means that a malicious process can patiently wait, without root permissions, until the sysadmin runs one of his/her own scripts that is writable by him/her, then edit it while it is running to gain root access. (So a sysadmin can't be confident that a script he/she vetted or wrote will do what it is supposed to, unless he/she makes it completely write protected before execution.)

  • Temporarily revoking write access protects against that; however, killing the process will prevent it from reinstating write access. (Which can be annoying, but a sysadmin who has the right to run it with sudo should also be able to make it writable again. Either way, I'd rather get annoyed than rooted because of negligence. A minimal sketch of this pattern is included at the end of this post.)

  • I reported this to [email protected]. I got several answers downplaying the severity, emphasizing the inadequacy or possible side effects of the proposed solutions, shifting responsibility, etc., until the conversation died out. It does not look like it's going to be fixed any time soon, so I decided to put this up on GitHub in an attempt to raise awareness and offer an individual workaround for those who don't want to fall for this.

https://github.com/RPBCACUEAIIBH/Bash-Script-Vulnerability-And-Solution
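
Here is a minimal sketch of the protective patterns described above (mytask.sh is a placeholder name, and this illustrates the general idea rather than the exact code in the repository). From the admin's side, write-protect the script before handing it to sudo, and restore write access afterwards:

  chmod a-w mytask.sh   # revoke write access before the run
  sudo ./mytask.sh
  chmod u+w mytask.sh   # give write access back once the run has finished

Inside a script, wrapping the whole body in a brace group forces Bash to parse everything up to the closing brace before executing any of it, so edits made to the file while it is running no longer change what actually executes:

  #!/bin/bash
  {
    echo "Doing the privileged work..."
    # ...rest of the script...
    exit 0  # explicit exit, so Bash never reads lines appended after the group
  }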

Open Source Research Center
Public post

I finally managed to get this multi-platform C++ library development tool working on Arduino Pro Mini (AVR), generic ESP-12F module (ESP8266), and Raspberry Pi Pico (RP2040) microcontrollers, as well as on a PC (Ryzen 5).

Now I can work on a unified C++ library that is mostly compatible with all of those platforms, and probably more, even though some architecture-specific things cannot be avoided.

https://github.com/RPBCACUEAIIBH/LibLab

Open Source Research Center
Public post

Hexa PA has been updated.

  • v0.3.0 is a major update: it comes with fixes after an API update broke the app, the available models have been updated to the latest gpt-3.5-turbo-0125 and gpt-4-turbo-preview, and the context management has been buffed.

  • When set to GPT 3.5 Turbo with Max Input Tokens = 0 (unlimited), Max Context Messages > 0, and Max Output Tokens > 0, it switches seamlessly between the 4k (latest) and 16k (legacy) models depending on how long the messages are.

  • It's no longer an even split between input and output tokens; it can now take advantage of GPT-4 Turbo's 128K context even though that model only does 4k output. (Successfully tested with more than 4k of context.)

Open Source Research Center
Public post

Since my last update, I implemented:

  • OpenAI came out with new models on the 13th of June, and HexaPA now uses the latest GPT-3.5 model and its new 16K variant. You can switch to the longer context by defining a token limit higher than 2048, up to 8192. (HexaPA only does an even split between input and output tokens for now...)
  • Reference injection - injects a reference to the rules into the prompt (only the prompt, not the context), which makes the AI a bit more likely to follow the rules. It can be defined on the rules screen.
  • The rules can now be exported to JSON, but not yet used as presets. (This feature is not finished; I just wanted to release an update with the new models and longer context length ASAP.)
  • I also set up a GitHub project page for HexaPA development plans, so you can now see in real time what features I'm working on, what is planned, and what is done already: https://github.com/users/RPBCACUEAIIBH/projects/7/views/1

As always, you can download and test HexaPA from here: https://github.com/RPBCACUEAIIBH/HexaPA

Open Source Research Center
Public post

Update

  • Since my last update, I have renamed the project to HexaPA, due to the pending trademark of the abbreviation "GPT" by OpenAI. The PA suffix is a better name for it anyway, whether it stands for "Productivity Accelerator" (which is the whole purpose of this software) or "Personal Assistant".
  • I added tooltips that show up at the bottom of the screen when the mouse hovers over a button, entry or textbox.
  • I also added an export feature, which exports to JSON for now. I plan to add CSV and simple plain-text support as well. (JSON is vital for those who may want to preserve conversations when updating...) The last 2 images show what rules and context were sent to the AI in the debug output, and also that the export contains the block IDs for the same rules and context for the prompt. (The highlighted parts on both images.) Thus the exported JSON can be used to fully reconstruct and/or analyze the conversation. This may be essential if it is ever used as training data, or to investigate why the AI responded a certain way, since the context can be cherry-picked by the user, so it's no longer just a linear conversation...
  • I updated the token counter to work with generated context, as it was still not counting the context tokens.
  • I also restructured some code, which is not a visible change, but vital for maintenance.
  • I fixed a token limit bug. (It was limiting individual messages to the set value, not the entire rules + context + prompt.)
  • And finally, I published the first pre-release, since it is now a fairly usable AI chat application, similar to ChatGPT.

Available on github: https://github.com/RPBCACUEAIIBH/HexaPA


Goals

$950 to reach the goal

1 Year goal - The bare minimum that I have to reach in a year, otherwise I may have to do something else or make major changes, maybe put everything behind a paywall... This covers:
  • Domain and storage for the OSRC webpage
  • Internet and electricity bills
  • My salary at the current local minimum wage
Any excess will be re-invested into equipment and materials.
$1,000,000 to reach the goal

Keep it coming, I'll find a way to use it! :) (Just kidding! I'd prefer not to disclose exact amounts other than the absolute minimum...)
