We are migrating from Cloudflare to Route 53 and want to make sure that all the records are migrated correctly.
# Subdomains to check; the dig output is normalised (tabs to spaces, TTL column
# dropped) so the answers from the two resolvers can be compared directly.
records = %w(
  www
  blog
)

records.each do |record|
  aws = `dig +noall +answer #{record}.example.com | tr '\t' ' ' | cut -d' ' -f1,3,4,5`
  cf  = `dig +noall +answer #{record}.example.com @1.1.1.1 | tr '\t' ' ' | cut -d' ' -f1,3,4,5`
  if aws != cf
    puts("\nerror:")
    puts(aws)
    puts(cf)
  end
end
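If the nameserver delegation has not been switched yet, the same check can be run against the Route 53 zone directly by pointing dig at one of its authoritative nameservers (something like dig @ns-123.awsdns-45.com www.example.com, where the nameserver hostname is only a placeholder; the real ones are listed in the hosted zone's NS record).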
Do you enjoy switching between us-east-1 and eu-central-1? If not, here are some shortcuts that might make your day.
# Swap one substring of the current path for another and cd into the result.
# It assumes the region/environment name appears in the working directory path.
function change_region(){
  folder_name=`pwd`
  new_region=$1
  old_region=$2
  new_folder="${folder_name/$old_region/$new_region}"
  cd "$new_folder"
}

function production(){
  change_region 'terraform_production' 'terraform_development'
}

function development(){
  change_region 'terraform_development' 'terraform_production'
}

function us(){
  change_region 'us-east-1' 'eu-central-1'
}

function eu(){
  change_region 'eu-central-1' 'us-east-1'
}
If you want free hosting, I mean really free, not cheap or some other kind of hosting, the only option I know of for now is Heroku.
There are some websites which advertise themselves as free Rails hosting, but they are paid.
So stay safe with the good old, slow-to-boot but very stable and enjoyable-to-work-with Heroku!
Hello folks,
You have just used pgloader to import some database into Postgres, but when you do
space_util=# \dt
Did not find any relations.
The problem is that you have to fix the search path for your tables.
Here is how to do it (or check the link for more ways):
space_util=# ALTER DATABASE space_util SET search_path = space_util, public;
space_util=# \dt
             List of relations
   Schema   |      Name       | Type  |  Owner
------------+-----------------+-------+----------
 space_util | some_nice_table | table | postgres
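Note that ALTER DATABASE ... SET only takes effect for new connections, so if \dt still shows nothing, reconnect or run SET search_path TO space_util, public; in the current session.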
…or how it took one day just to add one line of code
s3fs.S3FileSystem.cachable = False
Adding caching under the hood and not mentioning it in the documentation – that is called a dirty trick.
My case was a Lambda processing S3 files. When a file lands on S3, a Lambda processes it and triggers the next Lambda. The next Lambda works fine only the first time.
The first Lambda uses only boto3 and there is no problem.
The second Lambda uses s3fs.
The second invocation of that Lambda runs in an already initialized execution context, so s3fs thinks it knows which objects are on S3 – but it is wrong!
So… I found this issue – thank you, jalpes196!
Another way is to invalidate the cache:
from s3fs.core import S3FileSystem
# drop S3FileSystem instances cached by fsspec from a previous (warm) invocation
S3FileSystem.clear_instance_cache()
s3 = S3FileSystem(anon=False)
# forget cached directory listings so the next call hits S3 again
s3.invalidate_cache()
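For completeness, here is a minimal sketch of where the one-line fix from the top of the post would sit in a handler; the bucket name and handler shape are illustrative assumptions, not the original code.
import s3fs

# Disable the fsspec instance cache so a warm Lambda container does not
# reuse an S3FileSystem (and its stale listing cache) from a previous run.
s3fs.S3FileSystem.cachable = False

def handler(event, context):
    fs = s3fs.S3FileSystem(anon=False)
    # With the flag above, this listing is computed fresh on every invocation.
    return fs.ls("my-example-bucket")  # bucket name is illustrative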
I switched from Google Chrome to Chromium for security and privacy reasons. Now I am switching from Chromium to Firefox because of a number of issues.
Chromium stopped shipping deb packages and started using Snap. Snap runs the browser in a cgroup (probably) and hides very important folders from it:
- /tmp
- ~/.ssh
Certificates
My access to some payment website was rejected because my certificates are in ~/.ssh, which the snap can no longer read.
System tmp
When I download junk files/attachments I store them in /tmp, so on the next system reboot my /tmp is cleaned. Since I can't access /tmp from Chromium, I started using ~/tmp/ and now have tons of useless files.
Speed
When I switched to Firefox I noticed that this browser is much faster than Chrome.
- Chromium, after migrating to Snap, does not work correctly with D-Bus
- Firefox is faster
- No easy way to add a custom search engine