@zlopez:fedora.im
=================

2025-11-24 08:54:00
- Setup OpenID only ipsilon instance on production ([ticket](https://pagure.io/fedora-infrastructure/issue/10241)) - ipsilon03 instance is now deployed and working. There is still an issue with the haproxy redirect to the user page
- Deal with AI scrapers on pagure.io
- release-monitoring.org - Please drop sass-listen mapping to Fedora's rubygem-listen ([ticket](https://github.com/fedora-infra/anitya/issues/1977))
- Process PDR request

2025-11-21 09:31:00
- Setup OpenID only ipsilon instance on production ([ticket](https://pagure.io/fedora-infrastructure/issue/10241)) - tested the potential solution shared in https://bugzilla.redhat.com/show_bug.cgi?id=2415883 and it worked
- Toddlers: "initial_commit": false not respected in releng/fedora-scm-requests ([ticket](https://pagure.io/fedora-infra/toddlers/issue/362)) - tested it out in staging, just waiting for reporter to confirm
- Prepare slides for BCP (Business Continuity Plan) exercise
- release-monitoring.org: gimp gitlab url ([ticket](https://github.com/fedora-infra/anitya/issues/1926))

2025-11-20 08:41:00
- Add jgroman to jira_sync
- Review community blog post
- Prepare slides for BCP (Business Continuity Plan) exercise - will meet with BCP team today to clear some things
- Setup OpenID only ipsilon instance on production ([ticket](https://pagure.io/fedora-infrastructure/issue/10241)) - blocked on https://bugzilla.redhat.com/show_bug.cgi?id=2415883

2025-11-19 08:40:00
- I&R weekly report
- Finish Gemini course
- Process PDR requests
- src.fp.o is unavailable (503)
- Investigate cloud-image-uploader error (reached out to jcline as it looked like something in the Azure-related code)
- Setup OpenID only ipsilon instance on production ([ticket](https://pagure.io/fedora-infrastructure/issue/10241))


@gwmngilfen:fedora.im
=====================

2025-11-24 10:22:00
- RabbitMQ Zabbix template deployed to staging
  - manual for now, need to Ansible it
  - already picked up two queues with issues in stg (fmn and bodhi) 🦾
- fixed minor issues with zabbix playbook on oci-registry01
- some planning / prep
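The queue checks such a template performs can be approximated in a few lines. This is a hedged sketch, not the actual Zabbix template logic: the field names (`name`, `messages`) follow the RabbitMQ management API's `/api/queues` JSON, but the threshold and sample data below are invented.

```python
# Sketch: flag RabbitMQ queues whose backlog exceeds a threshold,
# approximating what a Zabbix item/trigger on the management API would do.
# Field names follow /api/queues JSON; threshold and sample data are made up.

def queues_with_backlog(queues, threshold=100):
    """Return names of queues holding more than `threshold` messages."""
    return [q["name"] for q in queues if q.get("messages", 0) > threshold]

# Fabricated example data (the real stg finds were the fmn and bodhi queues):
sample = [
    {"name": "fmn", "messages": 5321},
    {"name": "bodhi", "messages": 240},
    {"name": "healthy", "messages": 3},
]
print(queues_with_backlog(sample))  # ['fmn', 'bodhi']
```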

2025-11-21 09:35:00
- firmware proxy deployed to noc01, waiting on network team
- ended up doing a lot of troubleshooting yesterday, especially on an MQTT issue in CentOS
- meetings:
  - warranties
  - 2x internal 1:1s
  - weekly infra meeting

2025-11-20 09:50:00
- networking ticket for proxy created
- proxy config tested more, found some issues. mostly finished rewriting it, need to deploy it today
- got derailed by meetings from 11am to 6pm 😕
  - Mentee meeting
  - CentOS sprint planning
  - Coding Club
  - FRCL infra discussion

2025-11-19 10:11:00
- tested mailman3 change - failed. the option I want is only in a newer django-allauth package. looking at upgrading that.
- wrote a [PR](https://pagure.io/fedora-infra/ansible/pull-request/2967) for shushing zabbix during a host upgrade. needs testing and adding to the other reboot scripts
- wrote a [PR](https://pagure.io/fedora-infra/ansible/pull-request/2968) for proxying firmware traffic from iDRAC->Dell
- some meetings
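Silencing Zabbix during a host upgrade generally means opening a temporary maintenance window via its JSON-RPC API. The PR above does this from Ansible; as a minimal sketch of the idea, the request body might look like the following (`maintenance.create` is the real Zabbix API method, but the host ID, token, and window length here are placeholders):

```python
# Sketch: build a Zabbix JSON-RPC request that opens a maintenance window,
# silencing alerts for the given hosts during an upgrade/reboot.
# Host IDs and the auth token are placeholders, not real values.
import json
import time

def maintenance_payload(host_ids, duration_s=3600, auth_token="PLACEHOLDER"):
    """Return a `maintenance.create` JSON-RPC payload for `host_ids`."""
    now = int(time.time())
    return {
        "jsonrpc": "2.0",
        "method": "maintenance.create",
        "params": {
            "name": f"reboot-{now}",
            "active_since": now,
            "active_till": now + duration_s,
            "hostids": host_ids,
            # timeperiod_type 0 = one-time-only period
            "timeperiods": [{"timeperiod_type": 0, "period": duration_s}],
        },
        "auth": auth_token,
        "id": 1,
    }

payload = maintenance_payload(["10084"])
print(json.dumps(payload, indent=2))
```

The reboot script would POST this to `api_jsonrpc.php`, do the upgrade, then delete the maintenance object afterwards.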


@lenkaseg:fedora.im
===================

2025-11-21 11:32:00
- tested new features (ignored groups + email notifications) of cleaning_packager_groups in staging, deployed to production, confirmed it works: received an email notification, and the user was not removed from the sysadmin-main group
- co-created new weekly report format, published first report of this kind to CommBlog
- mentoring - issue selection, setting up access, etc.

2025-11-20 08:44:00
- forgejo routine maintenance: rebased main branch on latest forgejo v13.0, resolved a merge conflict
- Fixed konflux pull request syncing from github to codeberg: changed it to open PRs instead of force-pushing, and fixed a security issue by limiting the synced PRs to only those opened by the konflux bot; tested and it works.
  - https://codeberg.org/fedora/oci-image-definitions/pulls/22
  - https://codeberg.org/fedora/oci-image-definitions/src/branch/main/.github/workflows/gh_pr_sync.yml
- mentoring


@arrfab:fedora.im
=================

2025-11-21 13:37:00
- Changed softwarefactory sponsorship members (https://gitlab.com/CentOS/infra/tracker/-/issues/1807)
- Ensured we'd have backup from Netapp in rdu2 (from rdu3 through temporary wireguard tunnel and custom rsyncd module) - https://gitlab.com/CentOS/infra/tracker/-/issues/1812
- Fixed (thanks @gwmngilfen) the mqtt issue for signing process on cbs.centos.org (https://gitlab.com/CentOS/infra/tracker/-/issues/1808)
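A custom rsyncd module like the one mentioned above is typically just a stanza in `rsyncd.conf`. This is a hypothetical fragment, not the actual config: the module name, path, and wireguard peer address are all invented for illustration.

```
# /etc/rsyncd.conf -- hypothetical module for an rdu3 -> rdu2 backup sync.
# Module name, path, and peer address below are placeholders.
[netapp-backup]
    path = /srv/backup/netapp
    comment = Netapp backup sync over wireguard
    read only = yes
    uid = nobody
    gid = nobody
    # only allow the wireguard tunnel peer
    hosts allow = 10.0.0.2/32
    hosts deny = *
```

Restricting `hosts allow` to the tunnel peer keeps the module unreachable outside the temporary wireguard link.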

2025-11-20 13:19:00
- Fixed the postgresql DB saturation due to koji upgrade (see upstream related ticket https://pagure.io/koji/issue/4175)
- Implemented a workaround for releng sign+push process (not blocking anyone now)
- Sprint infra planning session

2025-11-19 12:25:00
- Migrated cbs/signing server to RDU3 (hybrid cloud model with compute nodes in remote isolated AWS VPC) - https://gitlab.com/CentOS/infra/tracker/-/issues/1792


@james:fedora.im
================


@nirik:matrix.scrye.com
=======================

2025-11-21 17:57:00
- Tuned some more anubis stuff
- Updated/rebooted/upgraded a bunch of things, helped out in outage
- vHMC is happy again after vmhost-x86-02 was updated/rebooted. Hurray! 🎊
- Both stg and prod openshift now on 4.20.x. There's an issue with logging in prod, however.
- Got network acls live for pagure-stg testing.

2025-11-20 16:57:00
- Bunch more attempts to get the vHMC happy, no luck. Will revisit after reboots
- synced data from pagure-stg01 to pagure-stg02, waiting for network acls to test
- updated/rebooted signing and backup servers.
- Adjusted new anubis for iot
- Upgraded stg/prod openshift clusters to latest 4.19.x, then did stg to 4.20.x

2025-11-19 16:29:00
- Spent a ton of time on trying to get the network stable on the vHMC without much luck. Will keep trying
- Looked over all the anubis reports from the last upgrade, tested policy in staging
- Rolled out latest anubis and policy in prod and watched logs.
- Got pagure-stg02 ansiblized


@patrikp:matrix.org
===================

2025-11-19 11:12:00
- Took the RHCSA; unfortunately did not pass. I intend to retake it sometime in December.
- We (RelEng) had a sprint planning/retro call yesterday.


