Profiler results are now sent to Redis #17459

Merged
merged 1 commit on Mar 8, 2022
26 changes: 25 additions & 1 deletion code/controllers/subsystem/profiler.dm
@@ -8,9 +8,13 @@ SUBSYSTEM_DEF(profiler)
var/fetch_cost = 0
/// Time it took to write the file (ms)
var/write_cost = 0
/// Time it took to encode the data for redis (ms)
var/send_encode_cost = 0
/// Time it took to send the data down FFI for redis (ms)
var/send_ffi_cost = 0

/datum/controller/subsystem/profiler/stat_entry()
-	..("F:[round(fetch_cost, 1)]ms | W:[round(write_cost, 1)]ms")
+	..("F:[round(fetch_cost, 1)]ms | W:[round(write_cost, 1)]ms | SE:[round(send_encode_cost, 1)]ms | SF:[round(send_ffi_cost, 1)]ms")

/datum/controller/subsystem/profiler/Initialize()
if(!GLOB.configuration.general.enable_auto_profiler)
@@ -37,14 +41,34 @@ SUBSYSTEM_DEF(profiler)
// Write the file while also cost tracking
/datum/controller/subsystem/profiler/proc/DumpFile()
var/timer = TICK_USAGE_REAL
// Fetch info
var/current_profile_data = world.Profile(PROFILE_REFRESH, format = "json")
fetch_cost = MC_AVERAGE(fetch_cost, TICK_DELTA_TO_MS(TICK_USAGE_REAL - timer))
CHECK_TICK
if(!length(current_profile_data)) //Would be nice to have explicit proc to check this
stack_trace("Warning, profiling stopped manually before dump.")
var/json_file = file("[GLOB.log_directory]/profile.json")
// Put it in a file
if(fexists(json_file))
fdel(json_file)
timer = TICK_USAGE_REAL
WRITE_FILE(json_file, current_profile_data)
write_cost = MC_AVERAGE(write_cost, TICK_DELTA_TO_MS(TICK_USAGE_REAL - timer))

// Send it down redis
if(SSredis.connected)
// Encode
timer = TICK_USAGE_REAL

var/list/ffi_data = list()
ffi_data["round_id"] = GLOB.round_id
// We don't have to JSON-decode here. The other end can worry about a 2-layer decode.
// Performance matters on this end. It doesn't on the other end.
ffi_data["profile_data"] = current_profile_data
AffectedArc07 (Member, Author) commented:

Going to see how expensive encode/decode is here, as it may decrease send time due to the send size being much smaller from not having tons of extra escapes in it. [screenshot of profiler timings]

AffectedArc07 (Member, Author) commented on Feb 27, 2022:

Update: lol no [screenshot of profiler timings]

Yes the send cost is lower, but the encode cost halted DD for 4 whole seconds. We're not doing that.
var/ffi_string = json_encode(ffi_data)
send_encode_cost = MC_AVERAGE(send_encode_cost, TICK_DELTA_TO_MS(TICK_USAGE_REAL - timer))

// Now actually fire it off
timer = TICK_USAGE_REAL
SSredis.publish("profilerdaemon.input", ffi_string)
send_ffi_cost = MC_AVERAGE(send_ffi_cost, TICK_DELTA_TO_MS(TICK_USAGE_REAL - timer))
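Since the game side deliberately skips decoding, whatever consumes the `profilerdaemon.input` channel has to do the 2-layer decode mentioned in the code comment: the outer envelope is JSON, and its `profile_data` field is itself a JSON-encoded string. A hedged Python sketch of that consumer side (the daemon itself is not in this PR, so the subscriber loop shown in comments is an assumption based on the redis-py pub/sub API):

```python
import json

def decode_profile_message(raw: str) -> tuple[int, list]:
    """Two-layer decode: the outer envelope carries round_id plus a
    profile_data field that is still a JSON-encoded string."""
    envelope = json.loads(raw)                       # layer 1: the envelope
    profile = json.loads(envelope["profile_data"])   # layer 2: the profile itself
    return envelope["round_id"], profile

# A redis-py subscriber loop might look like this (hypothetical, not run here):
# import redis
# p = redis.Redis().pubsub()
# p.subscribe("profilerdaemon.input")
# for msg in p.listen():
#     if msg["type"] == "message":
#         round_id, profile = decode_profile_message(msg["data"])
```

This keeps all of the CPU cost of unpacking on the daemon, which is the stated design choice: performance matters on the game server, not on the consumer.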