pyspark.sql.functions.unix_timestamp
pyspark.sql.functions.unix_timestamp(timestamp=None, format='yyyy-MM-dd HH:mm:ss')
Convert a time string with the given pattern (‘yyyy-MM-dd HH:mm:ss’ by default) to a Unix timestamp (in seconds), using the default timezone and the default locale; returns null if the conversion fails.
If timestamp is None, the current timestamp is returned.
New in version 1.5.0.
Changed in version 3.4.0: Supports Spark Connect.
- Parameters
  - timestamp : Column or str, optional
    timestamps of string values.
  - format : str, optional
    alternative format to use for converting (default: yyyy-MM-dd HH:mm:ss).
- Returns
  - Column
    unix time as a long integer.
Examples
>>> spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
Example 1: Return the current timestamp as Unix time.
>>> import pyspark.sql.functions as sf
>>> spark.range(1).select(sf.unix_timestamp().alias('unix_time')).show()
+----------+
| unix_time|
+----------+
|1702018137|
+----------+
Example 2: Parse a timestamp string using the default format ‘yyyy-MM-dd HH:mm:ss’.
>>> import pyspark.sql.functions as sf
>>> time_df = spark.createDataFrame([('2015-04-08 12:12:12',)], ['dt'])
>>> time_df.select(sf.unix_timestamp('dt').alias('unix_time')).show()
+----------+
| unix_time|
+----------+
|1428520332|
+----------+
Example 3: Parse a date string using the user-specified format ‘yyyy-MM-dd’.
>>> import pyspark.sql.functions as sf
>>> time_df = spark.createDataFrame([('2015-04-08',)], ['dt'])
>>> time_df.select(sf.unix_timestamp('dt', 'yyyy-MM-dd').alias('unix_time')).show()
+----------+
| unix_time|
+----------+
|1428476400|
+----------+
>>> spark.conf.unset("spark.sql.session.timeZone")