Getting the current Unix timestamp (in seconds) across common languages:
- JavaScript: Math.floor(Date.now() / 1000)
- Python: int(time.time())
- Go: time.Now().Unix()
- Rust: std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_secs()
- Ruby: Time.now.to_i
- PHP: time()
- Bash: date +%s
- PowerShell: [int][double]::Parse((Get-Date -UFormat %s))
- SQL (Postgres): SELECT EXTRACT(EPOCH FROM NOW())::INT
- SQL (MySQL): SELECT UNIX_TIMESTAMP()
What a Unix timestamp is
A Unix timestamp is the number of seconds elapsed since 00:00:00 UTC on Thursday, 1 January 1970 — known as the Unix epoch. It's monotonic (it never runs backwards as real time advances), independent of timezone (the same instant produces the same number anywhere on Earth), and stored as a simple integer. These three properties are why nearly every operating system, programming language, database, and protocol uses it as the internal representation of time.
The display you see in your application is converted FROM the Unix timestamp INTO whatever human format makes sense (your local timezone, your locale's date format, a relative phrase like "3 hours ago"). The conversion is one-way: a Unix timestamp tells you the exact instant, but the formatted string drops information (the timezone offset, sub-second precision, etc.). Always store timestamps as integers; only format them at display time.
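To make the store-integers, format-at-display rule concrete, here is a minimal JavaScript sketch (the stored value and the timezone are illustrative):

```js
// Stored once, as an integer of Unix seconds (illustrative value).
const createdAt = 1736500000;

// Converted to a human-readable form only at display time.
const d = new Date(createdAt * 1000); // the Date constructor takes milliseconds
console.log(d.toISOString()); // "2025-01-10T09:06:40.000Z", the same instant everywhere
console.log(d.toLocaleString("en-US", { timeZone: "America/New_York" })); // viewer-local rendering
```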
Seconds vs milliseconds vs microseconds
The original Unix spec used seconds, but many modern languages and APIs use milliseconds (JavaScript's Date.now(), Java's System.currentTimeMillis(), .NET's DateTimeOffset.UtcNow.ToUnixTimeMilliseconds()). The rule of thumb: if the number has 10 digits, it's seconds; 13 digits, milliseconds; 16 digits, microseconds; 19 digits, nanoseconds. The tool above auto-detects.
- Seconds: 1736500000 (10 digits — Unix classic)
- Milliseconds: 1736500000000 (13 digits — JS / Java / .NET)
- Microseconds: 1736500000000000 (16 digits — Python time.time_ns() // 1000)
- Nanoseconds: 1736500000000000000 (19 digits — Go time.UnixNano)
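As a sketch of how digit-count auto-detection can work (this mirrors the rule of thumb above; the tool's actual implementation may differ):

```js
// Normalize a timestamp of unknown magnitude to Unix seconds,
// using the digit-count heuristic described above.
function toUnixSeconds(ts) {
  const digits = String(Math.trunc(Math.abs(ts))).length;
  if (digits <= 10) return ts;       // seconds
  if (digits <= 13) return ts / 1e3; // milliseconds
  if (digits <= 16) return ts / 1e6; // microseconds
  return ts / 1e9;                   // nanoseconds
}

toUnixSeconds(1736500000000); // 1736500000
// Caveat: 19-digit nanosecond values exceed Number.MAX_SAFE_INTEGER,
// so exact nanosecond arithmetic needs BigInt.
```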
Confusing seconds and milliseconds is the #1 source of bugs around timestamps. A timestamp value of 1736500000 sent to JavaScript's new Date(...) produces a date in 1970 instead of 2025, because new Date expects milliseconds. Multiply by 1000 first.
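The bug and its fix, side by side:

```js
new Date(1736500000);        // 1970-01-21T02:21:40.000Z, interpreted as milliseconds
new Date(1736500000 * 1000); // 2025-01-10T09:06:40.000Z, the intended instant
```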
Common gotchas
- The Year 2038 problem. 32-bit signed integers can represent timestamps only up to 2147483647 = 03:14:07 UTC on 19 January 2038. Beyond that, the value wraps to a negative number representing a date in 1901. Modern systems (every 64-bit platform, all major databases) use 64-bit timestamps that won't overflow until the year 292 277 026 596, which is safe for any practical purpose. The remaining risk is in embedded systems, legacy MySQL TIMESTAMP columns (use DATETIME instead), and JavaScript code that does arithmetic on timestamps as int32.
- Leap seconds. Unix time technically does not count leap seconds — it pretends every day has exactly 86400 seconds. During the rare leap-second insertion, the timestamp stands still for one second instead of incrementing. For 99.9% of applications this is invisible. For high-precision logging or financial timestamps, you need a leap-second-aware timescale such as TAI.
- Negative timestamps represent dates before 1970. Most date libraries handle them correctly, but some legacy code assumes positive values and produces garbage.
- Daylight Saving Time is irrelevant to Unix time. A Unix timestamp is always UTC seconds since epoch — DST is a display concern. If your bug appears "on the spring forward day", it's almost certainly in your local-time formatting layer, not in the timestamp itself.
- JavaScript's Date.parse is lenient and unpredictable. Date.parse("2026-01-15") works; Date.parse("01-15-2026") works in some browsers and not others. For predictable parsing, use ISO 8601 strings (2026-01-15T10:30:00Z) only, or a library like Day.js / Luxon (see the sketch after this list).
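A quick illustration of the parsing pitfall, sticking to ISO 8601 for anything that must be unambiguous:

```js
// Unambiguous: ISO 8601 with an explicit UTC designator.
Date.parse("2026-01-15T10:30:00Z"); // 1768473000000 (milliseconds)
new Date("2026-01-15T10:30:00Z");   // the same instant as a Date object

// Engine-dependent: avoid in portable code.
Date.parse("01-15-2026"); // NaN in some engines, a local-time date in others
```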
Practical workflows
- Debugging logs with epoch timestamps: paste into this tool, see the human time. Then back-calculate which other events happened nearby in your logs.
- Setting an expiration in a JWT: the exp claim is Unix seconds (not milliseconds, despite many other JS APIs using ms). Use the tool to set the value precisely (see the sketch after this list).
- Cron-job debugging: many cron parsers report the "next run time" as a Unix timestamp; convert it to verify it's in the right timezone and at the expected wall-clock time.
- Database queries: WHERE created_at > 1735689600 filters records created since 2025-01-01 UTC. Avoid string comparisons of dates — they're slow and brittle.
- Bug reports from users with timestamps in odd formats: paste them here to convert, then compare with your system's UTC time at the moment of the report.
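For the JWT workflow, a minimal sketch of building the time claims (the payload fields other than exp and iat are illustrative):

```js
// JWT time claims are NumericDate values: Unix SECONDS, per RFC 7519.
const nowSeconds = Math.floor(Date.now() / 1000);

const payload = {
  sub: "user-123",           // illustrative subject claim
  iat: nowSeconds,           // issued-at
  exp: nowSeconds + 15 * 60, // expires 15 minutes from now
};
// Passing Date.now() (milliseconds) here would create a token that
// appears to expire tens of thousands of years in the future.
```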
Common use cases
- Decode timestamps from log files
- Set JWT exp claims
- Database query date filters
- Debug cron next-run times
- Compare timestamps across timezones
Frequently asked questions
How do I tell seconds from milliseconds?
Count the digits. 10 digits = seconds (Unix classic). 13 digits = milliseconds (JavaScript Date.now). 16 digits = microseconds. 19 digits = nanoseconds. The tool auto-detects.
What is the Year 2038 problem?
32-bit signed integers can only represent timestamps up to 2147483647, which is 03:14:07 UTC on 19 January 2038. Beyond that the value wraps to negative. Modern systems (every 64-bit platform, all major databases) use 64-bit timestamps and aren't affected.
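A one-line illustration in JavaScript, whose bitwise operators coerce numbers to int32 and so reproduce the wrap:

```js
const max32 = 2147483647;     // 03:14:07 UTC, 19 January 2038
console.log((max32 + 1) | 0); // -2147483648, which maps to 13 December 1901
```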
Are leap seconds counted?
No. Unix time treats every day as exactly 86400 seconds — during a leap-second insertion the timestamp stays still for one second instead of incrementing. For 99% of applications this is invisible.
Why does my JavaScript Date show 1970 when I pass a Unix timestamp?
JavaScript's Date constructor expects MILLISECONDS, not seconds. Multiply by 1000 first: <code>new Date(unixSeconds * 1000)</code>.